Dataset columns and their types (value-length ranges as reported by the viewer):

gem_id: string (length 37 to 41)
paper_id: string (length 3 to 4)
paper_title: string (length 19 to 183)
paper_abstract: string (length 168 to 1.38k)
paper_content: dict
paper_headers: dict
slide_id: string (length 37 to 41)
slide_title: string (length 2 to 85)
slide_content_text: string (length 11 to 2.55k)
target: string (length 11 to 2.55k)
references: list
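The schema above can be inspected programmatically. The sketch below is a minimal, hedged example using the Hugging Face `datasets` library; the dataset identifier "GEM/SciDuet" and the "train" split name are assumptions made for illustration, not something stated in this preview.

```python
# Sketch only: assumes the data is published on the Hugging Face Hub under
# an identifier like "GEM/SciDuet" (assumption) with a "train" split.
from datasets import load_dataset

ds = load_dataset("GEM/SciDuet", split="train")

row = ds[0]
# Flat string columns from the schema above.
for name in ["gem_id", "paper_id", "paper_title", "slide_id",
             "slide_title", "slide_content_text", "target"]:
    print(f"{name}: {str(row[name])[:80]}")

# Nested columns: paper_content holds parallel lists of sentence ids and
# sentence strings; paper_headers maps section numbers to section titles;
# references is a (possibly empty) list.
content = row["paper_content"]
print(len(content["paper_content_text"]), "sentences in the paper body")
print(list(zip(row["paper_headers"]["paper_header_number"],
               row["paper_headers"]["paper_header_content"])))
print(row["references"])
```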
GEM-SciDuet-train-76#paper-1191#slide-2
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Sequence-to-sequence (S2S) learning with attention mechanism recently became the most successful paradigm with state-of-the-art results in machine translation (MT) Sennrich et al., 2016a) , image captioning (Xu et al., 2015; Lu et al., 2016) , text summarization (Rush et al., 2015) and other NLP tasks.", "All of the above applications of S2S learning make use of a single encoder.", "Depending on the modality, it can be either a recurrent neural network (RNN) for textual input data, or a convolutional network for images.", "In this work, we focus on a special case of S2S learning with multiple input sequences of possibly different modalities and a single output-generating recurrent decoder.", "We explore various strategies the decoder can employ to attend to the hidden states of the individual encoders.", "The existing approaches to this problem do not explicitly model different importance of the inputs to the decoder Zoph and Knight, 2016) .", "In multimodal MT (MMT), where an image and its caption are on the input, we might expect the caption to be the primary source of information, whereas the image itself would only play a role in output disambiguation.", "In automatic post-editing (APE), where a sentence in a source language and its automatically generated translation are on the input, we might want to attend to the source text only in case the model decides that there is an error in the translation.", "We propose two interpretable attention strategies that take into account the roles of the individual source sequences explicitly-flat and hierarchical attention combination.", "This paper is organized as follows: In Section 2, we review the attention mechanism in single-source S2S learning.", "Section 3 introduces new attention combination strategies.", "In Section 4, we evaluate the proposed models on the MMT and APE tasks.", "We summarize the related work in Section 5, and conclude in Section 6.", "Attentive S2S Learning The attention mechanism in S2S learning allows an RNN decoder to directly access information about the input each time before it emits a symbol.", "Inspired by content-based addressing in Neural Turing Machines (Graves et al., 2014) , the attention mechanism estimates a probability distribution over the encoder hidden states in each decoding step.", "This distribution is used for computing the context vector-the weighted average of the encoder hidden states-as an additional input to the decoder.", "The standard attention model as described by defines the attention energies e ij , attention distribution α ij , and the con-text vector c i in i-th decoder step as: e ij = v a tanh(W a s i + U a h j ), (1) α ij = exp(e ij ) Tx k=1 exp(e ik ) , (2) c i = Tx j=1 α ij h j .", "(3) The trainable parameters W a and U a are projection matrices that transform the decoder and encoder states s i and h j into a common vector space and v a is a weight vector over the dimensions of this space.", "T x denotes the length of the input sequence.", "For the sake of clarity, bias terms 
(applied every time a vector is linearly projected using a weight matrix) are omitted.", "Recently, Lu et al.", "(2016) introduced sentinel gate, an extension of the attentive RNN decoder with LSTM units (Hochreiter and Schmidhuber, 1997) .", "We adapt the extension for gated recurrent units (GRU) , which we use in our experiments: ψ i = σ(W y y i + W s s i−1 ) (4) where W y and W s are trainable parameters, y i is the embedded decoder input, and s i−1 is the previous decoder state.", "Analogically to Equation 1, we compute a scalar energy term for the sentinel: e ψ i = v a tanh W a s i + U (ψ) a (ψ i s i ) (5) where W a , U (ψ) a are the projection matrices, v a is the weight vector, and ψ i s i is the sentinel vector.", "Note that the sentinel energy term does not depend on any hidden state of any encoder.", "The sentinel vector is projected to the same vector space as the encoder state h j in Equation 1.", "The term e ψ i is added as an extra attention energy term to Equation 2 and the sentinel vector ψ i s i is used as the corresponding vector in the summation in Equation 3.", "This technique should allow the decoder to choose whether to attend to the encoder or to focus on its own state and act more like a language model.", "This can be beneficial if the encoder does not contain much relevant information for the current decoding step.", "Attention Combination In S2S models with multiple encoders, the decoder needs to be able to combine the attention information collected from the encoders.", "A widely adopted technique for combining multiple attention models in a decoder is concatenation of the context vectors c (Zoph and Knight, 2016; .", "As mentioned in Section 1, this setting forces the model to attend to each encoder independently and lets the attention combination to be resolved implicitly in the subsequent network layers.", "(1) i , .", ".", ".", ", c (N ) i In this section, we propose two alternative strategies of combining attentions from multiple encoders.", "We either let the decoder learn the α i distribution jointly over all encoder hidden states (flat attention combination) or factorize the distribution over individual encoders (hierarchical combination).", "Both of the alternatives allow us to explicitly compute distribution over the encoders and thus interpret how much attention is paid to each encoder at every decoding step.", "Flat Attention Combination Flat attention combination projects the hidden states of all encoders into a shared space and then computes an arbitrary distribution over the projections.", "The difference between the concatenation of the context vectors and the flat attention combination is that the α i coefficients are computed jointly for all encoders: α (k) ij = exp(e (k) ij ) N n=1 T (n) x m=1 exp e (n) im (6) where T (n) x is the length of the input sequence of the n-th encoder and e (k) ij is the attention energy of the j-th state of the k-th encoder in the i-th decoding step.", "These attention energies are computed as in Equation 1.", "The parameters v a and W a are shared among the encoders, and U a is different for each encoder and serves as an encoder-specific projection of hidden states into a common vector space.", "The states of the individual encoders occupy different vector spaces and can have a different dimensionality, therefore the context vector cannot be computed as their weighted sum.", "We project 197 them into a single space using linear projections: c i = N k=1 T (k) x j=1 α (k) ij U (k) c h (k) j (7) where U (k) c are 
additional trainable parameters.", "The matrices U (k) c project the hidden states into a common vector space.", "This raises a question whether this space can be the same as the one that is projected into in the energy computation using matrices U (k) a in Equation 1, i.e., whether U (k) c = U (k) a .", "In our experiments, we explore both options.", "We also try both adding and not adding the sentinel α (ψ) i U (ψ) c (ψ i s i ) to the context vec- tor.", "Hierarchical Attention Combination The hierarchical attention combination model computes every context vector independently, similarly to the concatenation approach.", "Instead of concatenation, a second attention mechanism is constructed over the context vectors.", "We divide the computation of the attention distribution into two steps: First, we compute the context vector for each encoder independently using Equation 3.", "Second, we project the context vectors (and optionally the sentinel) into a common space (Equation 8), we compute another distribution over the projected context vectors (Equation 9) and their corresponding weighted average (Equation 10): e (k) i = v b tanh(W b s i + U (k) b c (k) i ), (8) β (k) i = exp(e (k) i ) N n=1 exp(e (n) i ) , (9) c i = N k=1 β (k) i U (k) c c (k) i (10) where c Experiments We evaluate the attention combination strategies presented in Section 3 on the tasks of multimodal translation (Section 4.1) and automatic post-editing (Section 4.2).", "The models were implemented using the Neural Monkey sequence-to-sequence learning toolkit (Helcl and Libovický, 2017) .", "12 In both setups, we process the textual input with bidirectional GRU network with 300 units in the hidden state in each direction and 300 units in embeddings.", "For the attention projection space, we use 500 hidden units.", "We optimize the network to minimize the output cross-entropy using the Adam algorithm (Kingma and Ba, 2014) with learning rate 10 −4 .", "Multimodal Translation The goal of multimodal translation is to generate target-language image captions given both the image and its caption in the source language.", "We train and evaluate the model on the Multi30k dataset .", "It consists of 29,000 training instances (images together with English captions and their German translations), 1,014 validation instances, and 1,000 test instances.", "The results are evaluated using the BLEU (Papineni et al., 2002) and ME-TEOR (Denkowski and Lavie, 2011) .", "In our model, the visual input is processed with a pre-trained VGG 16 network (Simonyan and Zisserman, 2014) without further fine-tuning.", "Atten-tion distribution over the visual input is computed from the last convolutional layer of the network.", "The decoder is an RNN with 500 conditional GRU units in the recurrent layer.", "We use byte-pair encoding (Sennrich et al., 2016b) with a vocabulary of 20,000 subword units shared between the textual encoder and the decoder.", "The results of our experiments in multimodal MT are shown in Table 1 .", "We achieved the best results using the hierarchical attention combination without the sentinel mechanism, which also showed the fastest convergence.", "The flat combination strategy achieves similar results eventually.", "Sharing the projections for energy and context vector computation does not improve over the concatenation baseline and slows the training almost prohibitively.", "Multimodal models were not able to surpass the textual baseline (BLEU 33.0).", "Using the conditional GRU units brought an improvement of about 1.5 BLEU 
points on average, with the exception of the concatenation scenario where the performance dropped by almost 5 BLEU points.", "We hypothesize this is caused by the fact that the model has to learn the implicit attention combination in multiple places - once in the output projection and three times inside the conditional GRU unit (Firat and Cho, 2016, Equations 10-12).", "We thus report the scores of the introduced attention combination techniques trained with conditional GRU units and compare them with the concatenation baseline trained with plain GRU units.", "Automatic MT Post-editing Automatic post-editing is the task of improving an automatically generated translation given the source sentence, where the translation system is treated as a black box.", "We used the data from the WMT16 APE Task, which consists of 12,000 training, 2,000 validation, and 1,000 test sentence triplets from the IT domain.", "Each triplet contains an English source sentence, an automatically generated German translation of the source sentence, and a manually post-edited German sentence as a reference.", "In the case of this dataset, the MT outputs are almost perfect and only little effort was required to post-edit the sentences.", "The results are evaluated using the human-targeted error rate (HTER) (Snover et al., 2006) and BLEU score (Papineni et al., 2002) .", "Following Libovický et al.", "(2016) , we encode the target sentence as a sequence of edit operations transforming the MT output into the reference.", "By this technique, we prevent the model from paraphrasing the input sentences.", "The decoder is a GRU network with 300 hidden units.", "Unlike in the MMT setup (Section 4.1), we do not use the conditional GRU because it is prone to overfitting on the small dataset we work with.", "The models were able to slightly, but significantly, improve over the baseline - leaving the MT output as is (HTER 24.8).", "The differences between the attention combination strategies are not significant.", "Related Work Attempts to use S2S models for APE are relatively rare.", "Niehues et al.", "(2016) concatenate both inputs into one long sequence, which forces the encoder to be able to work with both the source and target language.", "Their attention is then similar to our flat combination strategy; however, it can only be used for sequential data.", "The best system from the WMT'16 competition (Junczys-Dowmunt and Grundkiewicz, 2016) trains two separate S2S models, one translating from MT output to post-edited targets and the second one from source sentences to post-edited targets.", "The decoders average their output distributions similarly to decoder ensembling.", "The biggest source of improvement in this state-of-the-art post-editor came from additional training data generation, rather than from changes in the network architecture.", "Source: a man sleeping in a green room on a couch .", "Reference: ein Mann schläft in einem grünen Raum auf einem Sofa .", "Output with attention: ein Mann schläft auf einem grünen Sofa in einem grünen Raum .", "(1) source, (2) image, (3) sentinel. Figure 2 : Visualization of hierarchical attention in MMT.", "Each column in the diagram corresponds to the weights of the encoders and sentinel.", "Note that despite the overall low importance of the image encoder, it gets activated for the content words.", "Caglayan et al.", "(2016) used an architecture very similar to ours for multimodal translation.", "They made a strong assumption that the network 
can be trained in such a way that the hidden states of the encoder and the convolutional network occupy the same vector space and thus sum the context vectors from both modalities.", "In this way, their multimodal MT system (BLEU 27.82) remained far below the text-only setup (BLEU 32.50).", "New state-of-the-art results on the Multi30k dataset were achieved very recently by Calixto et al.", "(2017) .", "The best-performing architecture uses the last fully-connected layer of the VGG-19 network (Simonyan and Zisserman, 2014) as decoder initialization and only attends to the text encoder hidden states.", "With a stronger monomodal baseline (BLEU 33.7), their multimodal model achieved a BLEU score of 37.1.", "Similarly to Niehues et al.", "(2016) in the APE task, even further improvement was achieved by synthetically extending the dataset.", "Conclusions We introduced two new strategies of combining attention in a multi-source sequence-to-sequence setup.", "Both methods are based on computing a joint distribution over the hidden states of all encoders.", "We conducted experiments with the proposed strategies on the multimodal translation and automatic post-editing tasks, and we showed that the flat and hierarchical attention combination can be applied to these tasks while maintaining scores competitive with previously used techniques.", "Unlike the simple context vector concatenation, the introduced combination strategies can be used with the conditional GRU units in the decoder.", "On top of that, the hierarchical combination strategy exhibits faster learning than the other strategies." ] }
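The paper text stored in the row above (Section 2) defines single-source attention through Equations 1-3: additive energies, a softmax over them, and a context vector as the weighted average of the encoder states. Below is a minimal NumPy sketch of one decoding step under those equations; the dimensions and random parameters are purely illustrative and are not taken from the paper.

```python
import numpy as np

def attention_step(s_i, H, W_a, U_a, v_a):
    """One step of single-source additive attention (Eqs. 1-3 in the paper text).

    s_i : decoder state, shape (d_dec,)
    H   : encoder hidden states, shape (T_x, d_enc)
    W_a : projection of the decoder state, shape (d_att, d_dec)
    U_a : projection of the encoder states, shape (d_att, d_enc)
    v_a : weight vector over the attention space, shape (d_att,)
    """
    # Eq. 1: attention energies e_ij (bias terms omitted, as in the paper).
    e_i = np.tanh(W_a @ s_i + H @ U_a.T) @ v_a      # shape (T_x,)
    # Eq. 2: softmax over the energies gives the attention distribution.
    alpha_i = np.exp(e_i - e_i.max())
    alpha_i /= alpha_i.sum()
    # Eq. 3: context vector = weighted average of the encoder states.
    c_i = alpha_i @ H                               # shape (d_enc,)
    return alpha_i, c_i

# Illustrative sizes only; the paper's experiments use larger dimensions.
rng = np.random.default_rng(0)
d_dec, d_enc, d_att, T_x = 4, 6, 5, 7
alpha, c = attention_step(rng.normal(size=d_dec),
                          rng.normal(size=(T_x, d_enc)),
                          rng.normal(size=(d_att, d_dec)),
                          rng.normal(size=(d_att, d_enc)),
                          rng.normal(size=d_att))
print(alpha.sum(), c.shape)   # -> approx. 1.0 and (6,)
```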
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Attentive S2S Learning", "Attention Combination", "Flat Attention Combination", "Hierarchical Attention Combination", "Experiments", "Multimodal Translation", "Automatic MT Post-editing", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-76#paper-1191#slide-2
Attentive Sequence Learning
In each decoder step i: compute a distribution over encoder states given the decoder state; the decoder gets a context vector to decide about its output. e_ij = v_a^T tanh(W_a s_i + U_a h_j). What about multiple inputs?
In each decoder step i: compute a distribution over encoder states given the decoder state; the decoder gets a context vector to decide about its output. e_ij = v_a^T tanh(W_a s_i + U_a h_j). What about multiple inputs?
[]
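Section 3.2 of the paper text above factorizes attention into a per-encoder distribution and a second, encoder-level distribution (Equations 8-10), which makes the weight given to each input explicit. The following hedged NumPy sketch implements only that second step; it takes the per-encoder context vectors as given and invents all shapes and parameters for illustration.

```python
import numpy as np

def hierarchical_combination(s_i, contexts, W_b, U_b, U_c, v_b):
    """Hierarchical attention combination (Eqs. 8-10 in the paper text).

    s_i      : decoder state, shape (d_dec,)
    contexts : per-encoder context vectors c_i^(k) from Eq. 3, possibly of
               different dimensionalities
    W_b      : projection of the decoder state, shape (d_att, d_dec)
    U_b, U_c : per-encoder projections into a common space (lists of matrices)
    v_b      : weight vector over the common space, shape (d_att,)
    """
    # Eq. 8: one energy per encoder, computed from the decoder state and
    # that encoder's own context vector.
    e = np.array([v_b @ np.tanh(W_b @ s_i + U_b[k] @ c_k)
                  for k, c_k in enumerate(contexts)])
    # Eq. 9: softmax over encoders -> explicit distribution over the inputs.
    beta = np.exp(e - e.max())
    beta /= beta.sum()
    # Eq. 10: weighted average of the projected context vectors.
    c_i = sum(beta[k] * (U_c[k] @ c_k) for k, c_k in enumerate(contexts))
    return beta, c_i

# Two encoders with different state sizes (e.g. text and image features);
# all sizes and parameters below are made up for the sketch.
rng = np.random.default_rng(1)
d_dec, d_att, dims = 4, 5, [6, 8]
contexts = [rng.normal(size=d) for d in dims]
beta, c = hierarchical_combination(
    rng.normal(size=d_dec), contexts,
    rng.normal(size=(d_att, d_dec)),
    [rng.normal(size=(d_att, d)) for d in dims],
    [rng.normal(size=(d_att, d)) for d in dims],
    rng.normal(size=d_att))
print(beta, c.shape)   # beta sums to 1; c lives in the shared d_att space
```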
GEM-SciDuet-train-76#paper-1191#slide-3
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
GEM-SciDuet-train-76#paper-1191#slide-3
Context Vector Concatenation
Attention over input sequences computed independently. Combination resolved later on in the network
Attention over input sequences computed independently. Combination resolved later on in the network
[]
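The slide above summarizes the concatenation baseline: attention over each input sequence is computed independently and the resulting context vectors are simply concatenated, so their combination has to be resolved implicitly later in the network. Below is a small NumPy sketch of that baseline with made-up sizes; it is an illustration, not the paper's implementation.

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def concat_contexts(s_i, encoders):
    """Concatenation baseline: independent attention per encoder, context
    vectors joined by concatenation, no explicit weighting of the encoders.

    encoders : list of (H, W_a, U_a, v_a) tuples, one per input sequence.
    """
    contexts = []
    for H, W_a, U_a, v_a in encoders:
        e = np.tanh(W_a @ s_i + H @ U_a.T) @ v_a    # Eq. 1, per encoder
        alpha = softmax(e)                          # Eq. 2, per encoder
        contexts.append(alpha @ H)                  # Eq. 3, per encoder
    # Subsequent decoder layers must resolve the relative importance of
    # the encoders implicitly from this concatenated vector.
    return np.concatenate(contexts)

# Made-up sizes: a "textual" encoder with 7 states and a "visual" one with 3.
rng = np.random.default_rng(2)
d_dec, d_att = 4, 5
def make_encoder(T, d):
    return (rng.normal(size=(T, d)), rng.normal(size=(d_att, d_dec)),
            rng.normal(size=(d_att, d)), rng.normal(size=d_att))
c = concat_contexts(rng.normal(size=d_dec), [make_encoder(7, 6), make_encoder(3, 8)])
print(c.shape)   # (14,) = 6 + 8
```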
GEM-SciDuet-train-76#paper-1191#slide-4
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
GEM-SciDuet-train-76#paper-1191#slide-4
Flat Attention Combination
Importance of different inputs reflected in the joint attention distribution. One source: e_ij = v_a^T tanh(W_a s_i + U_a h_j), c_i = sum_j alpha_ij h_j. N sources: e_ij^(k) = v_a^T tanh(W_a s_i + U_a^(k) h_j^(k)), c_i = sum_k sum_j alpha_ij^(k) U_c^(k) h_j^(k). U_a^(k), U_c^(k) project states to a common space. Question: should U_a^(k) = U_c^(k)? (i.e. should the projection parameters be shared?)
Importance of different inputs reflected in the joint attention distribution. One source: e_ij = v_a^T tanh(W_a s_i + U_a h_j), c_i = sum_j alpha_ij h_j. N sources: e_ij^(k) = v_a^T tanh(W_a s_i + U_a^(k) h_j^(k)), c_i = sum_k sum_j alpha_ij^(k) U_c^(k) h_j^(k). U_a^(k), U_c^(k) project states to a common space. Question: should U_a^(k) = U_c^(k)? (i.e. should the projection parameters be shared?)
[]
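The slide above contrasts single-source attention with the flat combination, in which one softmax is computed jointly over the projected states of all encoders (Equations 6-7 in the paper text), and it raises the question whether the energy and context projections U_a^(k) and U_c^(k) should be shared. The sketch below keeps them separate; all dimensions and parameters are illustrative assumptions.

```python
import numpy as np

def flat_combination(s_i, encoder_states, W_a, U_a, U_c, v_a):
    """Flat attention combination (Eqs. 6-7 in the paper text): one joint
    softmax over the hidden states of all encoders.

    encoder_states : list of arrays H^(k), shape (T_k, d_k)
    W_a, v_a       : shared across encoders
    U_a, U_c       : encoder-specific projections into the shared space
                     (passing U_c = U_a gives the shared-projection variant).
    """
    # Energies for every state of every encoder, as in Eq. 1.
    energies = [np.tanh(W_a @ s_i + H @ U_a[k].T) @ v_a
                for k, H in enumerate(encoder_states)]
    # Eq. 6: a single softmax over the concatenation of all energies.
    e_all = np.concatenate(energies)
    alpha = np.exp(e_all - e_all.max())
    alpha /= alpha.sum()
    # Eq. 7: weighted sum of the *projected* states, because the encoders
    # may have different dimensionalities.
    projected = np.concatenate([H @ U_c[k].T for k, H in enumerate(encoder_states)])
    c_i = alpha @ projected
    return alpha, c_i

# Two encoders with different lengths and sizes; all numbers are illustrative.
rng = np.random.default_rng(3)
d_dec, d_att, shapes = 4, 5, [(7, 6), (3, 8)]
H = [rng.normal(size=s) for s in shapes]
alpha, c = flat_combination(
    rng.normal(size=d_dec), H,
    rng.normal(size=(d_att, d_dec)),
    [rng.normal(size=(d_att, d)) for _, d in shapes],
    [rng.normal(size=(d_att, d)) for _, d in shapes],
    rng.normal(size=d_att))
print(alpha.shape, c.shape)   # (10,) over 7+3 states, context in the shared space (5,)
```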
GEM-SciDuet-train-76#paper-1191#slide-5
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
can be trained in such a way that the hidden states of the encoder and the convolutional network occupy the same vector space and thus sum the context vectors from both modalities.", "In this way, their multimodal MT system (BLEU 27.82) remained far bellow the text-only setup (BLEU 32.50).", "New state-of-the-art results on the Multi30k dataset were achieved very recently by Calixto et al.", "(2017) .", "The best-performing architecture uses the last fully-connected layer of VGG-19 network (Simonyan and Zisserman, 2014) as decoder initialization and only attends to the text encoder hidden states.", "With a stronger monomodal baseline (BLEU 33.7), their multimodal model achieved a BLEU score of 37.1.", "Similarly to Niehues et al.", "(2016) in the APE task, even further improvement was achieved by synthetically extending the dataset.", "Conclusions We introduced two new strategies of combining attention in a multi-source sequence-to-sequence setup.", "Both methods are based on computing a joint distribution over hidden states of all encoders.", "We conducted experiments with the proposed strategies on multimodal translation and automatic post-editing tasks, and we showed that the flat and hierarchical attention combination can be applied to these tasks with maintaining competitive score to previously used techniques.", "Unlike the simple context vector concatenation, the introduced combination strategies can be used with the conditional GRU units in the decoder.", "On top of that, the hierarchical combination strategy exhibits faster learning than than the other strategies." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "5", "6" ], "paper_header_content": [ "Introduction", "Attentive S2S Learning", "Attention Combination", "Flat Attention Combination", "Hierarchical Attention Combination", "Experiments", "Multimodal Translation", "Automatic MT Post-editing", "Related Work", "Conclusions" ] }
GEM-SciDuet-train-76#paper-1191#slide-5
Hierarchical Attention Combination
Attention distribution is factored by input. First, compute a context vector for each encoder: $c_i^{(k)} = \sum_{j=1}^{T_x^{(k)}} \alpha_{ij}^{(k)} h_j^{(k)}$, where $\alpha_{ij}^{(k)}$ is computed using the vanilla attention. Then compute another attention distribution over the intermediate context vectors $c_i^{(k)}$ and get the resulting context vector $c_i$: $e_i^{(k)} = v_b^\top \tanh(W_b s_i + U_b^{(k)} c_i^{(k)})$, $\beta_i^{(k)} = \exp(e_i^{(k)}) / \sum_{n=1}^{N} \exp(e_i^{(n)})$, $c_i = \sum_{k=1}^{N} \beta_i^{(k)} U_c^{(k)} c_i^{(k)}$. As in the flat scenario, the context vectors have to be projected to a shared space. The same question arises: should $U_b^{(k)} = U_c^{(k)}$?
Attention distribution is factored by input. First, compute a context vector for each encoder: $c_i^{(k)} = \sum_{j=1}^{T_x^{(k)}} \alpha_{ij}^{(k)} h_j^{(k)}$, where $\alpha_{ij}^{(k)}$ is computed using the vanilla attention. Then compute another attention distribution over the intermediate context vectors $c_i^{(k)}$ and get the resulting context vector $c_i$: $e_i^{(k)} = v_b^\top \tanh(W_b s_i + U_b^{(k)} c_i^{(k)})$, $\beta_i^{(k)} = \exp(e_i^{(k)}) / \sum_{n=1}^{N} \exp(e_i^{(n)})$, $c_i = \sum_{k=1}^{N} \beta_i^{(k)} U_c^{(k)} c_i^{(k)}$. As in the flat scenario, the context vectors have to be projected to a shared space. The same question arises: should $U_b^{(k)} = U_c^{(k)}$?
[]
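To make the two-step computation on the slide above concrete, here is a minimal sketch of the hierarchical attention combination (Equations 8-10 of the paper). It assumes the per-encoder context vectors c_i^(k) have already been produced by the vanilla attention of each encoder; the PyTorch framing, module names, and dimensions are illustrative assumptions, not the authors' Neural Monkey implementation.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionCombination(nn.Module):
    """Sketch of the second-level attention over per-encoder context vectors."""

    def __init__(self, dec_dim, enc_dims, attn_dim, out_dim):
        super().__init__()
        self.W_b = nn.Linear(dec_dim, attn_dim)                          # projects decoder state s_i (Eq. 8)
        self.U_b = nn.ModuleList([nn.Linear(d, attn_dim, bias=False)     # per-encoder projection of c_i^(k)
                                  for d in enc_dims])
        self.v_b = nn.Linear(attn_dim, 1, bias=False)                    # energy weight vector
        self.U_c = nn.ModuleList([nn.Linear(d, out_dim, bias=False)      # projection used in Eq. 10
                                  for d in enc_dims])

    def forward(self, s_i, contexts):
        # s_i: (batch, dec_dim); contexts: list of N tensors c_i^(k), each (batch, enc_dims[k])
        energies = [self.v_b(torch.tanh(self.W_b(s_i) + U_b(c)))         # e_i^(k), Eq. 8
                    for U_b, c in zip(self.U_b, contexts)]
        betas = torch.softmax(torch.cat(energies, dim=1), dim=1)         # beta_i^(k), Eq. 9
        projected = torch.stack([U_c(c) for U_c, c in zip(self.U_c, contexts)], dim=1)
        c_i = (betas.unsqueeze(-1) * projected).sum(dim=1)               # c_i, Eq. 10
        return c_i, betas                                                # betas = per-encoder attention weights
```

Returning the betas alongside c_i is what makes the per-encoder attention directly inspectable, as in the visualization shown on the Example slide below.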
GEM-SciDuet-train-76#paper-1191#slide-6
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
GEM-SciDuet-train-76#paper-1191#slide-6
Experiments and Results
Experiments conducted on multimodal translation (MMT) and automatic post-editing (APE). In both flat and hierarchical scenarios, we tried both sharing and not sharing the projection matrices. Additionally, we tried using the sentinel gate [Lu et al., 2016], which enables the decoder to decide whether or not to attend to any encoder. Experiments conducted using Neural Monkey, code available here: [Results table: BLEU and METEOR for MMT, BLEU and HTER for APE, comparing the concat. baseline with the flat and hierarchical combinations.] Results on the Multi30k dataset and the APE dataset. The column share denotes whether the projection matrix is shared for energies and context vector computation; sent. indicates whether the sentinel vector has been used or not.
Experiments conducted on multimodal translation (MMT) and automatic post-editing (APE). In both flat and hierarchical scenarios, we tried both sharing and not sharing the projection matrices. Additionally, we tried using the sentinel gate [Lu et al., 2016], which enables the decoder to decide whether or not to attend to any encoder. Experiments conducted using Neural Monkey, code available here: [Results table: BLEU and METEOR for MMT, BLEU and HTER for APE, comparing the concat. baseline with the flat and hierarchical combinations.] Results on the Multi30k dataset and the APE dataset. The column share denotes whether the projection matrix is shared for energies and context vector computation; sent. indicates whether the sentinel vector has been used or not.
[]
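The sentinel gate mentioned in the slide above (adapted from Lu et al., 2016 for GRU decoders; Equations 4-5 of the paper) can be sketched as follows. The PyTorch framing and the way the shared attention parameters are passed in are assumptions made for illustration: the sentinel energy is simply appended to the encoder attention energies before the softmax, and the gated state psi_i * s_i acts as the corresponding context-vector candidate, letting the decoder fall back on its own state and behave more like a language model.

```python
import torch
import torch.nn as nn

class SentinelGate(nn.Module):
    """Sketch of the GRU-adapted sentinel gate used as an extra attention candidate."""

    def __init__(self, emb_dim, dec_dim, attn_dim):
        super().__init__()
        self.W_y = nn.Linear(emb_dim, dec_dim, bias=False)     # from embedded decoder input y_i (Eq. 4)
        self.W_s = nn.Linear(dec_dim, dec_dim, bias=False)     # from previous decoder state s_{i-1}
        self.U_psi = nn.Linear(dec_dim, attn_dim, bias=False)  # sentinel projection inside the energy (Eq. 5)

    def forward(self, y_i, s_prev, s_i, projected_state, v_a):
        # projected_state = W_a s_i and v_a (an attn_dim -> 1 map) are shared with the regular attention.
        psi = torch.sigmoid(self.W_y(y_i) + self.W_s(s_prev))            # psi_i, Eq. 4
        sentinel = psi * s_i                                              # gated decoder state psi_i * s_i
        e_psi = v_a(torch.tanh(projected_state + self.U_psi(sentinel)))  # sentinel energy, Eq. 5
        return sentinel, e_psi  # append e_psi to the encoder energies; use `sentinel` as its context vector
```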
GEM-SciDuet-train-76#paper-1191#slide-7
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
GEM-SciDuet-train-76#paper-1191#slide-7
Example
Source: a man sleeping in a green room on a couch. Reference: ein Mann schläft in einem grünen Raum auf einem Sofa. Output with attention: ein Mann schläft auf einem grünen Sofa in einem grünen Raum.
Source: a man sleeping in a green room on a couch. Reference: ein Mann schläft in einem grünen Raum auf einem Sofa. Output with attention: ein Mann schläft auf einem grünen Sofa in einem grünen Raum.
[]
GEM-SciDuet-train-76#paper-1191#slide-8
1191
Attention Strategies for Multi-Source Sequence-to-Sequence Learning
Modeling attention in neural multi-source sequence-to-sequence learning remains a relatively unexplored area, despite its usefulness in tasks that incorporate multiple source languages or modalities. We propose two novel approaches to combine the outputs of attention mechanisms over each source sequence, flat and hierarchical. We compare the proposed methods with existing techniques and present results of systematic evaluation of those methods on the WMT16 Multimodal Translation and Automatic Post-editing tasks. We show that the proposed methods achieve competitive results on both tasks.
GEM-SciDuet-train-76#paper-1191#slide-8
Conclusions
The results show both methods achieve comparable results to the existing approach (concatenation of the context vectors). Hierarchical attention combination achieved best results on MMT, and is faster to train. Both methods provide a trivial way to inspect the attention distribution w.r.t. the individual inputs. Thank you for your attention!
The results show both methods achieve comparable results to the existing approach (concatenation of the context vectors). Hierarchical attention combination achieved best results on MMT, and is faster to train. Both methods provide a trivial way to inspect the attention distribution w.r.t. the individual inputs. Thank you for your attention!
[]
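The hierarchical attention combination quoted in the paper-1191 content above (Equations 8-10) first builds one context vector per encoder and then runs a second attention over those context vectors. Below is a minimal NumPy sketch of that second step, assuming the per-encoder context vectors and the decoder state are already computed; all names, shapes, and toy dimensions are illustrative and are not taken from the Neural Monkey implementation, and the optional sentinel term is omitted.

```python
import numpy as np

def hierarchical_combination(s_i, contexts, W_b, U_b, v_b, U_c):
    """Second-level attention over per-encoder context vectors (Eq. 8-10).

    s_i      : decoder state, shape (d_dec,)
    contexts : list of N context vectors c^(k), each of shape (d_k,)
    W_b, U_b, v_b : energy parameters; U_b and U_c are lists with one matrix per encoder
    U_c      : projections of each c^(k) into a shared output space
    """
    # e^(k)_i = v_b . tanh(W_b s_i + U_b^(k) c^(k)_i)
    energies = np.array([v_b @ np.tanh(W_b @ s_i + U_b[k] @ c_k)
                         for k, c_k in enumerate(contexts)])
    # beta^(k)_i : softmax over the N encoders
    betas = np.exp(energies - energies.max())
    betas /= betas.sum()
    # c_i = sum_k beta^(k)_i U_c^(k) c^(k)_i
    c_i = sum(betas[k] * (U_c[k] @ c_k) for k, c_k in enumerate(contexts))
    return c_i, betas

# Toy usage: a 3-dimensional "text" context and a 7-dimensional "image" context.
rng = np.random.default_rng(0)
d_dec, d_att, d_out, dims = 4, 5, 6, [3, 7]
s_i = rng.normal(size=d_dec)
contexts = [rng.normal(size=d) for d in dims]
W_b = rng.normal(size=(d_att, d_dec))
U_b = [rng.normal(size=(d_att, d)) for d in dims]
v_b = rng.normal(size=d_att)
U_c = [rng.normal(size=(d_out, d)) for d in dims]
c_i, betas = hierarchical_combination(s_i, contexts, W_b, U_b, v_b, U_c)
print(betas)  # one weight per encoder, summing to 1
```

The returned per-encoder weights correspond to the columns visualized in the paper's hierarchical-attention figure (source, image, and, when used, the sentinel).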
GEM-SciDuet-train-77#paper-1192#slide-0
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-0
NMT is all the rage
Driving the current state-of-the-art (Sennrich et al., 2016) Widely adopted by the industry
Driving the current state-of-the-art (Sennrich et al., 2016) Widely adopted by the industry
[]
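The bpe2tree model quoted in the paper-1192 content above translates a source sentence into a linearized, lexicalized constituency tree (the "Jane hatte eine Katze" example). The following is a rough sketch of such a linearization, assuming the tree is given as nested (label, children) tuples; the exact bracket token format is an assumption modelled on the example in the record, not the authors' preprocessing script.

```python
def linearize(node):
    """Flatten a lexicalized constituency tree into a token sequence.

    A node is either a string (a word or sub-word terminal) or a tuple
    (label, [children]). Non-terminals open with '(LABEL' and close with
    ')LABEL', mirroring the bracketed output shown in the record.
    """
    if isinstance(node, str):
        return [node]
    label, children = node
    tokens = ["(" + label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")" + label)
    return tokens


tree = ("ROOT", [("S", [("NP", ["Jane"]),
                        ("VP", ["had", ("NP", ["a", "cat"])]),
                        "."])])
print(" ".join(linearize(tree)))
# (ROOT (S (NP Jane )NP (VP had (NP a cat )NP )VP . )S )ROOT
```

Stripping every token that starts with "(" or ")" and merging sub-words recovers the plain translation, which is how the surface form is derived for BLEU evaluation in the paper.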
GEM-SciDuet-train-77#paper-1192#slide-2
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-2
syntax was all the rage
The previous state-of-the-art was syntax-based SMT Can we bring the benefits of syntax into the recent neural systems? From Rico Sennrich, NMT: Breaking the Performance Plateau, 2016 i.e. systems that used linguistic information (usually represented as parse trees) From Williams, Sennrich, Post & Koehn (2016), Syntax-based Statistical Machine Translation Beaten by NMT in 2016
The previous state-of-the-art was syntax-based SMT Can we bring the benefits of syntax into the recent neural systems? From Rico Sennrich, NMT: Breaking the Performance Plateau, 2016 i.e. systems that used linguistic information (usually represented as parse trees) From Williams, Sennrich, Post & Koehn (2016), Syntax-based Statistical Machine Translation Beaten by NMT in 2016
[]
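The reordering analysis quoted above defines a distortion score d(s, t) = (1/n) * sum_{i=2..n} |a(i) - a(i-1)|, where a(i) is the source position with the highest attention weight for target position i. A small sketch of that computation follows, assuming the attention weights arrive as a (target length x source length) matrix; the toy matrix is invented purely for illustration.

```python
import numpy as np

def distortion_score(attention):
    """d(s, t) = (1/n) * sum_i |a(i) - a(i-1)| over hard alignments.

    attention : array of shape (n_target, n_source) with attention weights;
    a(i) is the source position with the highest weight for target token i.
    """
    a = attention.argmax(axis=1)            # hard alignment per target token
    n = len(a)
    return float(np.abs(np.diff(a)).sum()) / n

# Toy example: a 4-token target attending over a 4-token source.
toy = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.1, 0.1, 0.7, 0.1],
                [0.1, 0.7, 0.1, 0.1],
                [0.1, 0.1, 0.1, 0.7]])
print(distortion_score(toy))  # |2-0| + |1-2| + |3-1| = 5, divided by n=4 -> 1.25
```

For the bpe2tree model this score is computed only over tokens that correspond to terminals in the tree, so bracket tokens would have to be filtered out before calling the function.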
GEM-SciDuet-train-77#paper-1192#slide-3
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
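The vote-aggregation rules of the human evaluation described above are simple enough to state in code. The sketch below is an illustration only: it assumes each sentence receives exactly two annotations already collapsed to one of four labels; the label names, the helper names, and the choice to count a good/bad split as a disagreement are assumptions, not details from the paper.

```python
from collections import Counter

# Collapsed annotation labels ("better" and "a little better" are merged,
# as described above). These names are illustrative, not from the paper.
SENT1, SENT2, BOTH_GOOD, BOTH_BAD = "sent1", "sent2", "both_good", "both_bad"

def aggregate(a, b):
    """Map one sentence's two annotations to an outcome bucket."""
    neutral = {BOTH_GOOD, BOTH_BAD}
    if a == b:
        return a + "_strongly_better" if a in (SENT1, SENT2) else a
    if a in neutral and b in (SENT1, SENT2):
        return b + "_weakly_better"
    if b in neutral and a in (SENT1, SENT2):
        return a + "_weakly_better"
    return "disagree"   # includes the ambiguous good/bad split (an assumption)

def summarize(annotation_pairs):
    return Counter(aggregate(a, b) for a, b in annotation_pairs)

# Toy usage: three annotated sentences.
print(summarize([(SENT1, SENT1), (SENT2, BOTH_GOOD), (SENT1, SENT2)]))
# Counter({'sent1_strongly_better': 1, 'sent2_weakly_better': 1, 'disagree': 1})
```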
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
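As a companion to the GHKM-rule analysis above, here is a minimal sketch of how hard source/target alignment pairs could be read off an attention matrix with the 0.5 cut-off mentioned in the text. NumPy is assumed; the matrix orientation (target rows, source columns) and the function name are illustrative choices, not the authors' code.

```python
import numpy as np

def alignment_pairs(attention, threshold=0.5):
    """Keep every (source, target) token pair whose attention weight is
    above `threshold`, mirroring the cut-off used for rule extraction above.
    `attention` is a (target_len, source_len) array of weights."""
    tgt_idx, src_idx = np.where(attention > threshold)
    return sorted(zip(src_idx.tolist(), tgt_idx.tolist()))

# Toy 3x3 attention matrix: each target token attends mostly to one source token.
att = np.array([[0.8, 0.1, 0.1],
                [0.2, 0.6, 0.2],
                [0.1, 0.3, 0.6]])
print(alignment_pairs(att))   # [(0, 0), (1, 1), (2, 2)]
```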
GEM-SciDuet-train-77#paper-1192#slide-3
Syntax: Constituency Structure
A Constituency (a.k.a. Phrase-Structure) grammar defines a set of rewrite rules which describe the structure of the language. Groups words into larger units (constituents). Defines a hierarchy between constituents. Draws relations between different constituents (words, phrases, clauses).
A Constituency (a.k.a. Phrase-Structure) grammar defines a set of rewrite rules which describe the structure of the language. Groups words into larger units (constituents). Defines a hierarchy between constituents. Draws relations between different constituents (words, phrases, clauses).
[]
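To make the bullet points on this slide concrete, the toy sketch below represents a constituency parse as nested tuples and lists, for each constituent, its label and the words it groups together. The example sentence and the helper names are purely illustrative and are not taken from the paper or its data.

```python
# A parse tree as nested tuples: (label, child, child, ...); leaves are words.
tree = ("S",
        ("NP", "Jane"),
        ("VP", "had", ("NP", "a", "cat")))

def leaves(node):
    """Collect the words covered by a node, left to right."""
    if isinstance(node, str):
        return [node]
    _, *children = node
    return [word for child in children for word in leaves(child)]

def constituents(node):
    """List (label, covered words) for every constituent, top-down,
    which makes the grouping and the hierarchy explicit."""
    if isinstance(node, str):
        return []
    label, *children = node
    found = [(label, " ".join(leaves(node)))]
    for child in children:
        found.extend(constituents(child))
    return found

for label, span in constituents(tree):
    print(f"{label:3s} -> {span}")
# S   -> Jane had a cat
# NP  -> Jane
# VP  -> had a cat
# NP  -> a cat
```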
GEM-SciDuet-train-77#paper-1192#slide-4
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
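The distortion score defined above is d(s, t) = (1/n) * sum_{i=2..n} |a(i) - a(i-1)|, computed over hard alignments obtained by taking, for every target token, the source position with the highest attention weight. The sketch below is one possible implementation, assuming NumPy and a (target_len, source_len) attention matrix; restricting the bpe2tree score to terminal tokens would additionally require masking out the bracket positions, which is not shown.

```python
import numpy as np

def hard_alignments(attention):
    """For each target position, take the source position with the highest
    attention weight. `attention` is a (target_len, source_len) array."""
    return attention.argmax(axis=1)

def distortion_score(alignment):
    """Average jump size between the source positions aligned to
    consecutive target tokens: (1/n) * sum_{i=2..n} |a(i) - a(i-1)|."""
    n = len(alignment)
    if n < 2:
        return 0.0
    return float(np.abs(np.diff(alignment)).sum()) / n

# Toy 4x4 attention matrix (target rows, source columns).
att = np.array([[0.7, 0.1, 0.1, 0.1],
                [0.1, 0.1, 0.7, 0.1],
                [0.1, 0.7, 0.1, 0.1],
                [0.1, 0.1, 0.1, 0.7]])
a = hard_alignments(att)          # array([0, 2, 1, 3])
print(distortion_score(a))        # (2 + 1 + 2) / 4 = 1.25
```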
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-4
Why Syntax Can Help MT
Hints as to which word sequences belong together. Helps in producing well-structured sentences. Allows informed reordering decisions according to the syntactic structure. Encourages long-distance dependencies when selecting translations.
Hints as to which word sequences belong together. Helps in producing well-structured sentences. Allows informed reordering decisions according to the syntactic structure. Encourages long-distance dependencies when selecting translations.
[]
GEM-SciDuet-train-77#paper-1192#slide-6
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
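The evaluation step described above (drop the non-terminal symbols from a predicted linearized tree, then merge the sub-words) can be sketched as follows. The exact shape of the bracket tokens (here '(NP' / ')NP') and the '@@ ' BPE continuation marker are assumptions about the preprocessing, not guaranteed to match the authors' scripts.

```python
import re

def tree_to_surface(linearized, bpe_marker="@@"):
    """Recover the plain translation from a linearized, lexicalized tree:
    drop every token that opens or closes a phrase, then undo the BPE
    segmentation by merging continuation sub-words."""
    tokens = linearized.split()
    words = [t for t in tokens if not t.startswith(("(", ")"))]
    sentence = " ".join(words)
    # "kit@@ ten" -> "kitten"
    return re.sub(re.escape(bpe_marker) + " ", "", sentence)

pred = "(ROOT (S (NP Jane )NP (VP had (NP a kit@@ ten )NP )VP . )S )ROOT"
print(tree_to_surface(pred))   # Jane had a kitten .
```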
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-6
Our Approach: String-to-Tree NMT
Main idea: translate a source sentence into a linearized tree of the target sentence. Inspired by works on RNN-based syntactic parsing (Vinyals et al., 2015; Choe & Charniak, 2016). Allows using the seq2seq framework as-is.
Main idea: translate a source sentence into a linearized tree of the target sentence. Inspired by works on RNN-based syntactic parsing (Vinyals et al., 2015; Choe & Charniak, 2016). Allows using the seq2seq framework as-is.
[]
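The "linearized tree of the target sentence" on this slide can be produced by a simple depth-first walk that emits an opening symbol for each phrase, its children, and a matching closing symbol. In the sketch below the tree is given as nested tuples with POS-level nodes already collapsed to bare words, and the '(XX' / ')XX' token format mirrors the paper's Figure 2 only approximately; treat it as an illustration of the idea rather than the exact preprocessing.

```python
def linearize(node):
    """Depth-first linearization of a lexicalized constituency tree.
    Non-terminal nodes are (label, child, ...) tuples; leaves are words."""
    if isinstance(node, str):
        return [node]
    label, *children = node
    tokens = ["(" + label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")" + label)
    return tokens

tree = ("ROOT",
        ("S",
         ("NP", "Jane"),
         ("VP", "had", ("NP", "a", "cat")),
         "."))
print(" ".join(linearize(tree)))
# (ROOT (S (NP Jane )NP (VP had (NP a cat )NP )VP . )S )ROOT
```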
GEM-SciDuet-train-77#paper-1192#slide-7
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-7
Experimental Details
We used the Nematus toolkit (Sennrich et al. 2017) Joint BPE segmentation (Sennrich et al. 2016) For training, we parse the target side using the BLLIP parser (McClosky, Charniak and Johnson, 2006) Requires some care about making BPE, Tokenization and Parser work together
We used the Nematus toolkit (Sennrich et al. 2017) Joint BPE segmentation (Sennrich et al. 2016) For training, we parse the target side using the BLLIP parser (McClosky, Charniak and Johnson, 2006) Requires some care about making BPE, Tokenization and Parser work together
[]
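The experimental-details slide above notes that BPE segmentation, tokenization, and the BLLIP parse of the target side have to be made to work together before the linearized, lexicalized trees of Figure 2 can be produced. The following Python sketch only illustrates that preprocessing step and is not the authors' code: the nested tree representation and the segment() stand-in for a trained BPE model are assumptions made here.

def segment(word):
    # stand-in for a learned BPE model (e.g. subword-nmt); a real system
    # would return the sub-word units of `word`, possibly with @@ markers
    return [word]

def linearize(node):
    # node = (label, children); a pre-terminal has children == [word]
    label, children = node
    if len(children) == 1 and isinstance(children[0], str):
        # drop the POS tag and keep only the (sub-)words, as the paper does
        return segment(children[0])
    tokens = ["(" + label]
    for child in children:
        tokens.extend(linearize(child))
    tokens.append(")" + label)
    return tokens

# "Jane had a cat ." from Figure 2 of the paper
tree = ("ROOT", [("S", [("NP", [("NNP", ["Jane"])]),
                        ("VP", [("VBD", ["had"]),
                                ("NP", [("DT", ["a"]), ("NN", ["cat"])])]),
                        (".", ["."])])])
print(" ".join(linearize(tree)))
# (ROOT (S (NP Jane )NP (VP had (NP a cat )NP )VP . )S )ROOT

Whether a bracket token is written as "(NP" or split into two symbols is a detail of the output vocabulary; the paper only states that opening and closing symbols for each phrase type are added to the target-side vocabulary alongside the sub-words.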
GEM-SciDuet-train-77#paper-1192#slide-8
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task shown improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A smallscale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-8
Experiments Large Scale
German to English, 4.5 million parallel training sentences from WMT16 Train two NMT models using the same setup (same settings as the SOTA neural system in WMT16) The syntax-aware model performs better in terms of BLEU
German to English, 4.5 million parallel training sentences from WMT16 Train two NMT models using the same setup (same settings as the SOTA neural system in WMT16) The syntax-aware model performs better in terms of BLEU
[]
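The reordering analysis quoted in the paper content above converts each system's attention weights into hard alignments (every target word is aligned to the source position with the highest weight) and scores distortion as the average jump between consecutive alignments, d(s, t) = (1/n) * sum_{i=2..n} |a(i) - a(i-1)|. A minimal Python sketch of that computation follows; it is not the authors' analysis script, and the attention matrix is invented purely for the example.

def distortion(attention):
    # attention[i][j]: weight on source position j when emitting target word i
    a = [max(range(len(row)), key=row.__getitem__) for row in attention]  # hard alignment
    n = len(a)
    if n < 2:
        return 0.0
    # d(s, t) = (1/n) * sum_{i=2..n} |a(i) - a(i-1)|
    return sum(abs(a[i] - a[i - 1]) for i in range(1, n)) / n

attention = [
    [0.7, 0.2, 0.1],  # target word 1 -> source word 1
    [0.1, 0.1, 0.8],  # target word 2 -> source word 3
    [0.2, 0.6, 0.2],  # target word 3 -> source word 2
]
print(distortion(attention))  # (|2 - 0| + |1 - 2|) / 3 = 1.0

For the bpe2tree system the paper computes this score only over tokens that correspond to terminals in the predicted tree, so the bracket symbols would have to be filtered out before applying the sketch above.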
GEM-SciDuet-train-77#paper-1192#slide-9
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task shown improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A smallscale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-9
Experiments Low Resource
The syntax-aware model performs better in terms of BLEU in all cases (12 comparisons) Up to 2+ BLEU improvement
The syntax-aware model performs better in terms of BLEU in all cases (12 comparisons) Up to 2+ BLEU improvement
[]
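Both experiment slides report BLEU, and the paper obtains the surface translation of the bpe2tree system by removing the non-terminal symbols from the predicted linearized tree and then merging the sub-words. The small sketch below illustrates that post-processing step under the assumption that BPE continuations carry the usual subword-nmt "@@" marker; the example tokens are invented.

def tree_to_surface(tokens):
    # 1) drop the tree symbols; assume every token starting with "(" or ")"
    #    is an opening/closing phrase symbol such as "(NP" / ")NP"
    words = [t for t in tokens if not t.startswith(("(", ")"))]
    # 2) merge sub-words: "ad@@ opted" -> "adopted"
    return " ".join(words).replace("@@ ", "")

pred = "(ROOT (S (NP Jane )NP (VP ad@@ opted (NP a cat )NP )VP . )S )ROOT".split()
print(tree_to_surface(pred))  # Jane adopted a cat .

BLEU is then computed on these surface strings with the mteval-v13a.pl script from Moses, exactly as for the bpe2bpe baseline.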
GEM-SciDuet-train-77#paper-1192#slide-10
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task shown improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A smallscale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
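The record above ends with the distortion measure used to quantify reordering: hard alignments are taken by argmax over the attention weights, then d(s, t) = (1/n) * sum_{i=2..n} |a(i) - a(i-1)|. A minimal sketch of that computation is given below; the function names and the toy alignments are illustrative only and are not taken from the dataset or from the original system.

```python
import numpy as np

def distortion_score(a):
    """d(s, t) = (1/n) * sum_{i=2..n} |a(i) - a(i-1)|, where a(i) is the
    source position aligned to the i-th target token. For the tree-output
    model, only positions of terminal tokens would be included."""
    a = np.asarray(a)
    n = len(a)
    return float(np.abs(np.diff(a)).sum()) / n if n > 1 else 0.0

def hard_alignments(attention):
    """Collapse a (target_len x source_len) matrix of soft attention
    weights into one source position per target token (argmax)."""
    return np.argmax(np.asarray(attention), axis=1)

# toy check: a monotone alignment reorders less than a permuted one
print(distortion_score([0, 1, 2, 3]))   # 0.75
print(distortion_score([3, 0, 2, 1]))   # 1.5
```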
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-10
Accurate Trees
99% of the predicted trees in the development set had valid bracketing. Eye-balling the predicted trees found them well-formed and following the syntax of English.
99% of the predicted trees in the development set had valid bracketing. Eye-balling the predicted trees found them well-formed and following the syntax of English.
[]
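The slide above reports that 99% of the predicted linearized trees had valid bracketing, and the paper text earlier describes recovering the surface translation by dropping the non-terminal symbols and merging sub-words. The sketch below shows one way both steps could look; the exact bracket token shapes ('(NP' / ')NP') and the '@@' sub-word continuation marker are assumptions about the preprocessing, not something taken from the dataset itself.

```python
def check_and_detree(tokens):
    """Validate the bracketing of a linearized, lexicalized tree and
    recover the surface string (non-terminal symbols removed, BPE
    sub-words merged). Returns (is_valid, surface_or_None)."""
    stack, words = [], []
    for tok in tokens:
        if tok.startswith("(") and len(tok) > 1:
            stack.append(tok[1:])                 # opening phrase symbol
        elif tok.startswith(")") and len(tok) > 1:
            if not stack or stack.pop() != tok[1:]:
                return False, None                # crossing / mismatched bracket
        else:
            words.append(tok)                     # terminal word or sub-word
    if stack:
        return False, None                        # unclosed bracket
    surface = " ".join(words).replace("@@ ", "")  # merge BPE sub-words
    return True, surface

ok, sentence = check_and_detree(
    "(ROOT (S (NP Jane )NP (VP had (NP a cat )NP )VP . )S )ROOT".split())
print(ok, sentence)   # True 'Jane had a cat .'
```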
GEM-SciDuet-train-77#paper-1192#slide-11
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
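The abstract describes translating into linearized, lexicalized constituency trees, i.e. parse trees with POS pre-terminals dropped and explicit opening/closing symbols for every remaining phrase. A small, self-contained sketch of such a linearization is below; the nested-tuple tree encoding and the bracket token shapes are chosen for illustration and may differ from the preprocessing actually used to build this dataset.

```python
# A parse node is (label, children); a child is either another node or a word.
def linearize(node):
    """Produce the linearized, lexicalized target form: POS pre-terminals
    are replaced by their word, every other phrase contributes an opening
    '(X' and a closing ')X' symbol."""
    label, children = node
    out = ["(" + label]
    for child in children:
        if isinstance(child, str):
            out.append(child)                          # bare word / sub-word
        elif len(child[1]) == 1 and isinstance(child[1][0], str):
            out.append(child[1][0])                    # drop the POS pre-terminal
        else:
            out.extend(linearize(child))               # recurse into sub-phrases
    out.append(")" + label)
    return out

tree = ("ROOT", [("S", [("NP", [("NNP", ["Jane"])]),
                        ("VP", [("VBD", ["had"]),
                                ("NP", [("DT", ["a"]), ("NN", ["cat"])])]),
                        (".", ["."])])])
print(" ".join(linearize(tree)))
# (ROOT (S (NP Jane )NP (VP had (NP a cat )NP )VP . )S )ROOT
```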
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-11
Where Syntax Helps Alignments
The attention-based model induces soft alignments between the source and the target. The syntax-aware model produced more sensible alignments.
The attention-based model induces soft alignments between the source and the target. The syntax-aware model produced more sensible alignments.
[]
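The soft alignments mentioned in this slide are the decoder's attention weights. A minimal sketch of collapsing them into hard word alignments (one source token per target token, by argmax) is shown below; the attention matrix, the tokens and the function name are made up for illustration. The resulting positions are exactly what the distortion score quoted earlier in the document operates on.

```python
import numpy as np

def attention_to_alignments(attention, src_tokens, trg_tokens):
    """For every target token, pick the source token with the highest
    attention weight. `attention` is (target_len x source_len)."""
    att = np.asarray(attention)
    pairs = []
    for i, trg in enumerate(trg_tokens):
        j = int(att[i].argmax())
        pairs.append((trg, src_tokens[j], j))
    return pairs

# toy usage with an invented 2 x 3 attention matrix
src = ["Jane", "hatte", "eine"]
trg = ["Jane", "had"]
att = [[0.8, 0.1, 0.1],
       [0.2, 0.7, 0.1]]
for trg_tok, src_tok, j in attention_to_alignments(att, src, trg):
    print(f"{trg_tok} -> {src_tok} (source position {j})")
```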
GEM-SciDuet-train-77#paper-1192#slide-12
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-12
Attending to Source syntax
We inspected the attention weights during the production of the trees' opening brackets. The model consistently attends to the main verb ("hatte") or to structural markers (question marks, hyphens) in the source sentence. This indicates the system implicitly learns source syntax to some extent and possibly plans the decoding accordingly.
We inspected the attention weights during the production of the trees' opening brackets. The model consistently attends to the main verb ("hatte") or to structural markers (question marks, hyphens) in the source sentence. This indicates the system implicitly learns source syntax to some extent and possibly plans the decoding accordingly.
[]
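This slide's observation (the very first opening bracket attends to the main verb or to a structural marker) can be checked directly against a stored attention matrix. The sketch below does that lookup; the weights and token sequences are invented for illustration, and the function name is not part of any existing toolkit.

```python
import numpy as np

def first_bracket_attention(attention, src_tokens, out_tokens):
    """Return the source token (and its weight) that receives the most
    attention at the decoding step producing the first opening bracket."""
    att = np.asarray(attention)
    for i, tok in enumerate(out_tokens):
        if tok.startswith("("):                       # first '(X' symbol
            j = int(att[i].argmax())
            return src_tokens[j], float(att[i, j])
    return None, 0.0

# invented example: the '(ROOT' step attends mostly to the main verb 'hatte'
src = ["Jane", "hatte", "eine", "Katze", "."]
out = ["(ROOT", "(S", "(NP", "Jane", ")NP"]
att = np.array([[0.05, 0.70, 0.10, 0.10, 0.05],      # step emitting '(ROOT'
                [0.10, 0.40, 0.20, 0.20, 0.10],
                [0.60, 0.20, 0.10, 0.05, 0.05],
                [0.80, 0.10, 0.05, 0.03, 0.02],
                [0.70, 0.15, 0.05, 0.05, 0.05]])
word, weight = first_bracket_attention(att, src, out)
print(word, weight)   # hatte 0.7
```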
GEM-SciDuet-train-77#paper-1192#slide-14
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-14
Structure I Reordering
German to English translation requires a significant amount of reordering during translation. Quantifying reordering shows that the syntax-aware system performs more reordering during the training process. We would like to interpret the increased reordering from a syntactic perspective. We extract GHKM rules (Galley et al., 2004) from the dev set using the predicted trees and attention-induced alignments. The most common rules reveal linguistically sensible transformations, like moving the verb from the end of a German constituent to the beginning of the matching English one. More examples in the paper.
German to English translation requires a significant amount of reordering during translation. Quantifying reordering shows that the syntax-aware system performs more reordering during the training process. We would like to interpret the increased reordering from a syntactic perspective. We extract GHKM rules (Galley et al., 2004) from the dev set using the predicted trees and attention-induced alignments. The most common rules reveal linguistically sensible transformations, like moving the verb from the end of a German constituent to the beginning of the matching English one. More examples in the paper.
[]
GEM-SciDuet-train-77#paper-1192#slide-15
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-15
Structure II Relative Constructions
A common linguistic structure is relative constructions, e.g. 'The XXX which YYY', 'A XXX whose YYY'. The words that connect the clauses in such constructions are called relative pronouns, e.g. who, which, whom. The syntax-aware system produced more relative pronouns due to the syntactic context. Example: Guangzhou, das in Deutschland auch Kanton genannt wird / Guangzhou, which is also known as Canton in Germany / Guangzhou, also known in Germany, is one of / Guangzhou, which is also known as the canton in Germany. Example: Zugleich droht der stark von internationalen Firmen abhängigen / At the same time, the image of the region, which is heavily reliant on international companies / At the same time, the region's heavily dependent region / At the same time, the region, which is heavily dependent on international firms.
A common linguistic structure is relative constructions, e.g. 'The XXX which YYY', 'A XXX whose YYY'. The words that connect the clauses in such constructions are called relative pronouns, e.g. who, which, whom. The syntax-aware system produced more relative pronouns due to the syntactic context. Example: Guangzhou, das in Deutschland auch Kanton genannt wird / Guangzhou, which is also known as Canton in Germany / Guangzhou, also known in Germany, is one of / Guangzhou, which is also known as the canton in Germany. Example: Zugleich droht der stark von internationalen Firmen abhängigen / At the same time, the image of the region, which is heavily reliant on international companies / At the same time, the region's heavily dependent region / At the same time, the region, which is heavily dependent on international firms.
[]
GEM-SciDuet-train-77#paper-1192#slide-16
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-16
Human Evaluation
We performed a small-scale human evaluation using Mechanical Turk on the first 500 sentences in newstest2015. Two turkers per sentence. The syntax-aware translations had an advantage over the baseline. (Results chart categories: 2bpe better / neutral / 2tree better)
We performed a small-scale human evaluation using Mechanical Turk on the first 500 sentences in newstest2015. Two turkers per sentence. The syntax-aware translations had an advantage over the baseline. (Results chart categories: 2bpe better / neutral / 2tree better)
[]
GEM-SciDuet-train-77#paper-1192#slide-17
1192
Towards String-to-Tree Neural Machine Translation
We present a simple method to incorporate syntactic information about the target language in a neural machine translation system by translating into linearized, lexicalized constituency trees. Experiments on the WMT16 German-English news translation task showed improved BLEU scores when compared to a syntax-agnostic NMT baseline trained on the same dataset. An analysis of the translations from the syntax-aware system shows that it performs more reordering during translation in comparison to the baseline. A small-scale human evaluation also showed an advantage to the syntax-aware system.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130 ], "paper_content_text": [ "Introduction and Model Neural Machine Translation (NMT) (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014) has recently became the state-of-the-art approach to machine translation (Bojar et al., 2016) , while being much simpler than the previously dominant phrase-based statistical machine translation (SMT) approaches (Koehn, 2010) .", "NMT models usually do not make explicit use of syntactic information about the languages at hand.", "However, a large body of work was dedicated to syntax-based SMT (Williams et al., 2016) .", "One prominent approach to syntaxbased SMT is string-to-tree (S2T) translation Knight, 2001, 2002) , in which a sourcelanguage string is translated into a target-language tree.", "S2T approaches to SMT help to ensure the resulting translations have valid syntactic structure, while also mediating flexible reordering between the source and target languages.", "The main formalism driving current S2T SMT systems is GHKM rules (Galley et al., 2004 (Galley et al., , 2006 , which are synchronous transduction grammar (STSG) fragments, extracted from word-aligned sentence pairs with syntactic trees on one side.", "The GHKM translation rules allow flexible reordering on all levels of the parse-tree.", "We suggest that NMT can also benefit from the incorporation of syntactic knowledge, and propose a simple method of performing string-to-tree neural machine translation.", "Our method is inspired by recent works in syntactic parsing, which model trees as sequences Choe and Charniak, 2016) .", "Namely, we translate a source sentence into a linearized, lexicalized constituency tree, as demonstrated in Figure 2 .", "Figure 1 shows a translation from our neural S2T model compared to one from a vanilla NMT model for the same source sentence, as well as the attention-induced word alignments of the two models.", "Figure 1 : Top -a lexicalized tree translation predicted by the bpe2tree model.", "Bottom -a translation for the same sentence from the bpe2bpe model.", "The blue lines are drawn according to the attention weights predicted by each model.", "Note that the linearized trees we predict are different in their structure from those in as instead of having part of speech tags as terminals, they contain the words of the translated sentence.", "We intentionally omit the POS informa-Jane hatte eine Katze .", "→ ( ROOT ( S ( N P Jane ) N P ( V P had ( N P a cat ) N P ) V P . 
)", "S ) ROOT Figure 2 : An example of a translation from a string to a linearized, lexicalized constituency tree.", "tion as including it would result in significantly longer sequences.", "The S2T model is trained on parallel corpora in which the target sentences are automatically parsed.", "Since this modeling keeps the form of a sequence-to-sequence learning task, we can employ the conventional attention-based sequence to sequence paradigm (Bahdanau et al., 2014) as-is, while enriching the output with syntactic information.", "Related Work Some recent works did propose to incorporate syntactic or other linguistic knowledge into NMT systems, although mainly on the source side: Eriguchi et al.", "(2016a,b) replace the encoder in an attention-based model with a Tree-LSTM (Tai et al., 2015) over a constituency parse tree; Bastings et al.", "(2017) encoded sentences using graph-convolutional networks over dependency trees; Sennrich and Haddow (2016) proposed a factored NMT approach, where each source word embedding is concatenated to embeddings of linguistic features of the word; Luong et al.", "(2015) incorporated syntactic knowledge via multi-task sequence to sequence learning: their system included a single encoder with multiple decoders, one of which attempts to predict the parse-tree of the source sentence; Stahlberg et al.", "(2016) proposed a hybrid approach in which translations are scored by combining scores from an NMT system with scores from a Hiero (Chiang, 2005 (Chiang, , 2007 system.", "Shi et al.", "(2016) explored the syntactic knowledge encoded by an NMT encoder, showing the encoded vector can be used to predict syntactic information like constituency trees, voice and tense with high accuracy.", "In parallel and highly related to our work, Eriguchi et al.", "(2017) proposed to model the target syntax in NMT in the form of dependency trees by using an RNNG-based decoder (Dyer et al., 2016) , while Nadejde et al.", "(2017) incorporated target syntax by predicting CCG tags serialized into the target translation.", "Our work differs from those by modeling syntax using constituency trees, as was previously common in the \"traditional\" syntaxbased machine translation literature.", "Experiments & Results Experimental Setup We first experiment in a resource-rich setting by using the German-English portion of the WMT16 news translation task (Bojar et al., 2016) , with 4.5 million sentence pairs.", "We then experiment in a low-resource scenario using the German, Russian and Czech to English training data from the News Commentary v8 corpus, following Eriguchi et al.", "(2017) .", "In all cases we parse the English sentences into constituency trees using the BLLIP parser (Charniak and Johnson, 2005) .", "1 To enable an open vocabulary translation we used sub-word units obtained via BPE (Sennrich et al., 2016b) on both source and target.", "2 In each experiment we train two models.", "A baseline model (bpe2bpe), trained to translate from the source language sentences to English sentences without any syntactic annotation, and a string-to-linearized-tree model (bpe2tree), trained to translate into English linearized constituency trees as shown in Figure 2 .", "Words are segmented into sub-word units using the BPE model we learn on the raw parallel data.", "We use the NEMATUS 3 implementation of an attention-based NMT model.", "4 We trained the models until there was no improvement on the development set in 10 consecutive checkpoints.", "Note that the only difference between the baseline 
and the bpe2tree model is the syntactic information, as they have a nearly-identical amount of model parameters (the only additional parameters to the syntax-aware system are the embeddings for the brackets of the trees).", "For all models we report results of the best performing single model on the dev-set (new-stest2013+newstest2014 in the resource rich setting, newstest2015 in the rest, as measured by BLEU) when translating newstest2015 and new-stest2016, similarly to Sennrich et al.", "(2016a) ; Eriguchi et al.", "(2017) .", "To evaluate the string-totree translations we derive the surface form by removing the symbols that stand for non-terminals in the tree, followed by merging the sub-words.", "We also report the results of an ensemble of the last 5 checkpoints saved during each model training.", "We compute BLEU scores using the mteval-v13a.pl script from the Moses toolkit (Koehn et al., 2007) .", "Results As shown in Table 1 , for the resource-rich setting, the single models (bpe2bpe, bpe2tree) perform similarly in terms of BLEU on newstest2015.", "On newstest2016 we witness an advantage to the bpe2tree model.", "A similar trend is found when evaluating the model ensembles: while they improve results for both models, we again see an advantage to the bpe2tree model on newstest2016.", "Table 2 shows the results in the low-resource setting, where the bpe2tree model is consistently better than the bpe2bpe baseline.", "We find this interesting as the syntax-aware system performs a much harder task (predicting trees on top of the translations, thus handling much longer output sequences) while having a nearly-identical amount of model parameters.", "In order to better understand where or how the syntactic information improves translation quality, we perform a closer analysis of the WMT16 experiment.", "Analysis The Resulting Trees Our model produced valid trees for 5970 out of 6003 sentences in the development set.", "While we did not perform an in-depth error-analysis, the trees seem to follow the syntax of English, and most choices seem reasonable.", "Quantifying Reordering English and German differ in word order, requiring a significant amount of reordering to generate a fluent translation.", "A major benefit of S2T models in SMT is facilitating reordering.", "Does this also hold for our neural S2T model?", "We compare the amount of reordering in the bpe2bpe and bpe2tree models using a distortion score based on the alignments derived from the attention weights of the corresponding systems.", "We first convert the attention weights to hard alignments by taking for each target word the source word with highest attention weight.", "For an n-word target sentence t and source sentence s let a(i) be the position of the source word aligned to the target word in position i.", "We define: d(s, t) = 1 n n i=2 |a(i) − a(i − 1)| For example, for the translations in Figure 1 , the above score for the bpe2tree model is 2.73, while the score for the bpe2bpe model is 1.27 as the bpe2tree model did more reordering.", "Note that for the bpe2tree model we compute the score only on tokens which correspond to terminals (words or sub-words) in the tree.", "We compute this score for each source-target pair on newstest2015 for each model.", "Figure 3 shows a histogram of the binned score counts.", "The bpe2tree model has more translations with distortion scores in bins 1onward and significantly less translations in the least-reordering bin (0) when compared to the bpe2bpe model, indicating that the syntactic 
information encouraged the model to perform more reordering.", "5 Figure 4 tracks the distortion scores throughout the learning process, plotting the average dev-set scores for the model checkpoints saved every 30k updates.", "Interestingly, both models obey to the following trend: open with a relatively high distortion score, followed by a steep decrease, and from there ascend gradually.", "The bpe2tree model usually has a higher distortion score during training, as we would expect after our previous findings from Figure 3 .", "Tying Reordering and Syntax The bpe2tree model generates translations with their constituency tree and their attention-derived alignments.", "We can use this information to extract GHKM rules (Galley et al., 2004) .", "6 We derive Table 4 : Translation examples from newstest2015.", "The underlines correspond to the source word attended by the first opening bracket (these are consistently the main verbs or structural markers) and the target words this source word was most strongly aligned to.", "See the supplementary material for an attention weight matrix example when predicting a tree ( Figure 6 ) and additional output examples.", "hard alignments for that purpose by treating every source/target token-pair with attention score above 0.5 as an alignment.", "Extracting rules from the dev-set predictions resulted in 233,657 rules, where 22,914 of them (9.8%) included reordering, i.e.", "contained variables ordered differently in the source and the target.", "We grouped the rules by their LHS (corresponding to a target syntactic structure), and sorted them by the total number of RHS (corresponding to a source sequential structure) with reordering.", "Table 3 shows the top 10 extracted LHS, together with the top-5 RHS, for each rule.", "The most common rule, VP(x 0 :TER x 1 :NP) → x 1 x 0 , found in 184 sentences in the dev set (8.4%), is indicating that the sequence x 1 x 0 in German was reordered to form a verb phrase in English, in which x 0 is a terminal and x 1 is a noun phrase.", "The extracted GHKM rules reveal very sensible German-English reordering patterns.", "Relative Constructions Browsing the produced trees hints at a tendency of the syntax-aware model to favor using relative-clause structures and subordination over other syntactic constructions (i.e., \"several cameras that are all priced...\" vs. 
\"several cameras, all priced...\").", "To quantify this, we count the English relative pronouns (who, which, that 7 , whom, whose) found in the newstest2015 translations of each model and in the reference translations, as shown in Figure 5 .", "The bpe2tree model produces more relative constructions compared to the bpe2bpe model, and both models produce more such constructions than found in the reference.", "Main Verbs While not discussed until this point, the generated opening and closing brackets also have attention weights, providing another opportunity to to peak into the model's behavior.", "Figure 6 in the supplementary material presents an example of a complete attention matrix, including the syntactic brackets.", "While making full sense of the attention patterns of the syntactic elements remains a challenge, one clear trend is that opening the very first bracket of the sentence consistently attends to the main verb or to structural markers (i.e.", "question marks, hyphens) in the source sentence, suggesting a planning-ahead behavior of the decoder.", "The underlines in Table 4 correspond to the source word attended by the first opening bracket, and the target word this source word was most strongly aligned to.", "In general, we find the alignments from the syntax-based system more sensible (i.e.", "in Figure 1 -the bpe2bpe alignments are off-by-1).", "Qualitative Analysis and Human Evaluations The bpe2tree translations read better than their bpe2bpe counterparts, both syntactically and semantically, and we highlight some examples which demonstrate this.", "Table 4 lists some representative examples, highlighting improvements that correspond to syntactic phenomena involving reordering or global structure.", "We also performed a small-scale human-evaluation using mechanical turk on the first 500 sentences in the dev-set.", "Further details are available in the supplementary material.", "The results are summarized in the following table: 2bpe weakly better 100 2bpe strongly better 54 2tree weakly better 122 2tree strongly better 64 both good 26 both bad 3 disagree 131 As can be seen, in 186 cases (37.2%) the human evaluators preferred the bpe2tree translations, vs. 154 cases (30.8%) for bpe2bpe, with the rest of the cases (30%) being neutral.", "Conclusions and Future Work We present a simple string-to-tree neural translation model, and show it produces results which are better than those of a neural string-to-string model.", "While this work shows syntactic information about the target side can be beneficial for NMT, this paper only scratches the surface with what can be done on the subject.", "First, better models can be proposed to alleviate the long sequence problem in the linearized approach or allow a more natural tree decoding scheme (Alvarez-Melis and Jaakkola, 2017) .", "Comparing our approach to other syntax aware NMT models like Eriguchi et al.", "(2017) and Nadejde et al.", "(2017) may also be of interest.", "A Contrastive evaluation (Sennrich, 2016) of a syntax-aware system vs. 
a syntax-agnostic system may also shed light on the benefits of incorporating syntax into NMT.", "A Supplementary Material Data The English side of the corpus was tokenized (into Penn treebank format) and truecased using the scripts provided in Moses (Koehn et al., 2007) .", "We ran the BPE process on a concatenation of the source and target corpus, with 89500 BPE operations in the WMT experiment and with 45k operations in the other experiments.", "This resulted in an input vocabulary of 84924 tokens and an output vocabulary of 78499 tokens in the WMT16 experiment.", "The linearized constituency trees are obtained by simply replacing the POS tags in the parse trees with the corresponding word or subwords.", "The output vocabulary in the bpe2tree models includes the target subwords and the tree symbols which correspond to an opening or closing of a specific phrase type.", "Hyperparameters The word embedding size was set to 500/256 and the encoder and decoder sizes were set to 1024/256 (WMT16/other experiments).", "For optimization we used Adadelta (Zeiler, 2012) with minibatch size of 40.", "For decoding we used beam search with a beam size of 12.", "We trained the bpe2tree WMT16 model on sequences with a maximum length of 150 tokens (the average length for a linearized tree in the training set was about 50 tokens).", "It was trained for two weeks on a single Nvidia TitanX GPU.", "The bpe2bpe WMT16 model was trained on sequences with a maximum length of 50 tokens, and with minibatch size of 80.", "It was trained for one week on a single Nvidia TitanX GPU.", "Only in the low-resource experiments we applied dropout as described in Sennrich et al.", "(2016a) for Romanian-English.", "Human Evaluation We performed humanevaluation on the Mechnical Turk platform.", "Each sentence was evaluated using two annotators.", "For each sentence, we presented the annotators with the English reference sentence, followed by the outputs of the two systems.", "The German source was not shown, and the two system's outputs were shown in random order.", "The annotators were instructed to answer \"Which of the two sentences, in your view, is a better portrayal of the the reference sentence.\"", "They were then given 6 options: \"sent 1 is better\", \"sent 2 is better\", \"sent 1 is a little better\", \"sent 2 is a little better\", \"both sentences are equally good\", \"both sentences are equally bad\".", "We then ignore differences between \"better\" and \"a little better\".", "We count as \"strongly better\" the cases where both annotators indicated the same sentence as better, as \"weakly better\" the cases were one annotator chose a sentence and the other indicated they are both good/bad.", "Other cases are treated as either \"both good\" / \"both bad\" or as disagreements.", "Figure 6 : The attention weights for the string-totree translation in Figure 1 Additional Output Examples from both models, in the format of Figure 1 .", "Notice the improved translation and alignment quality in the tree-based translations, as well as the overall high structural quality of the resulting trees.", "The few syntactic mistakes in these examples are attachment errors of SBAR and PP phrases, which will also challenge dedicated parsers." ] }
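The reordering analysis in the paper content above defines a distortion score d(s, t) = (1/n) * sum over i = 2..n of |a(i) - a(i-1)|, where the hard alignment a(i) takes, for each target token, the source position with the highest attention weight, and it uses a 0.5 attention threshold when deriving alignments for the GHKM rule extraction. The sketch below is an illustrative reimplementation of those two steps, assuming the attention weights are available as a (target_len, source_len) NumPy array; it is not code from the paper.

```python
import numpy as np

def distortion_score(attention):
    """attention: (target_len, source_len) matrix, one attention distribution
    over source positions per generated target token."""
    a = attention.argmax(axis=1)                  # a(i): source position with max attention
    n = len(a)
    if n < 2:
        return 0.0
    return float(np.abs(np.diff(a)).sum()) / n    # (1/n) * sum_{i=2..n} |a(i) - a(i-1)|

def thresholded_alignments(attention, threshold=0.5):
    """(target, source) index pairs whose attention weight exceeds the
    threshold, as used for the GHKM rule extraction analysis."""
    return [(int(t), int(s)) for t, s in np.argwhere(attention > threshold)]

# toy check: a perfectly monotone 4-token alignment scores 3/4 = 0.75
print(distortion_score(np.eye(4)))
```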
{ "paper_header_number": [ "1", "2", "3", "4" ], "paper_header_content": [ "Introduction and Model", "Experiments & Results", "Analysis", "Conclusions and Future Work" ] }
GEM-SciDuet-train-77#paper-1192#slide-17
Conclusions
Neural machine translation can clearly benefit from target-side syntax Other recent work includes: A general approach - can be easily incorporated into other neural language generation tasks like summarization, image caption generation Larger picture: don't throw away your linguistics! Neural systems can also leverage symbolic linguistic information
Neural machine translation can clearly benefit from target-side syntax Other recent work includes: A general approach - can be easily incorporated into other neural language generation tasks like summarization, image caption generation Larger picture: don't throw away your linguistics! Neural systems can also leverage symbolic linguistic information
[]
GEM-SciDuet-train-78#paper-1203#slide-0
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
In this paper, we focus on the problem of building assistive systems that can help users to write reviews. We cast this problem using an encoder-decoder framework that generates personalized reviews by expanding short phrases (e.g. review summaries, product titles) provided as input to the system. We incorporate aspect-level information via an aspect encoder that learns 'aspect-aware' user and item representations. An attention fusion layer is applied to control generation by attending on the outputs of multiple encoders. Experimental results show that our model is capable of generating coherent and diverse reviews that expand the contents of input phrases. In addition, the learned aspect-aware representations discover those aspects that users are more inclined to discuss and bias the generated text toward their personalized aspect preferences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 ], "paper_content_text": [ "Introduction Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017) .", "Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects.", "A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural lan-guage queries (i.e., to find items about which a user would be most likely to write a particular phrase).", "Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017) .", "Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications.", "For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review.", "Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017) , extraction of sentiment lexicons (Zhang et al., 2014) , and aspectaware sentiment analysis (Wang et al., 2016; McAuley et al., 2012) .", "These works are related to review generation since they can provide prior knowledge to supervise the generative process.", "We are interested in exploring how such knowledge (e.g.", "extracted aspects) can be used in the review generation task.", "In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge.", "Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases.", "These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g.", "brand words, technical specifications, etc.).", "We propose an encoderdecoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder.", "The sequence encoder uses a gated recurrent unit 0 0 0 … 1 0 0 1 0 … 0 0 (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships.", "The aspect-aware representation is helpful to discover what each user is likely to discuss about each 
item.", "Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer.", "The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects.", "Related Work Review generation belongs to a large body of work on data-to-text natural language generation (Gatt and Krahmer, 2017) , which has applications including summarization (See et al., 2017) , image captioning (Vinyals et al., 2015) , and dialogue response generation (Xing et al., 2017; Ghosh et al., 2017) , among others.", "Among these, review generation is characterized by the need to generate long sequences and estimate high-order interactions between users and items.", "Several approaches have been recently proposed to tackle these problems.", "Dong et al.", "(2017) proposed an attribute-to-sequence (Attr2Seq) method to encode user and item identities as well as rating information with a multi-layer perceptron and a decoder then generates reviews conditioned on this information.", "They also used an attention mechanism to strengthen the alignment between output and input attributes.", "Ni et al.", "(2017) trained a collaborative-filtering generative concatenative network to jointly learn the tasks of review generation and item recommendation.", "Zang and Wan (2017) proposed a hierarchical structure to generate long reviews; they assume each sentence is associated with an aspect score, and learn the attention between aspect scores and sentences during training.", "Our approach differs from these mainly in our goal of incorporating auxiliary textual information (short phrases, product specifications, etc.)", "into the generative process, which facilitates the generation of higher-fidelity reviews.", "Another line of work related to review generation is aspect extraction and opinion mining (Park et al., 2015; Qiu et al., 2017; He et al., 2017; Chen et al., 2014) .", "In this paper, we argue that the extra aspect (opinion) information extracted using these previous works can effectively improve the quality of generated reviews.", "We propose a simple but effective way to combine aspect information into the generative model.", "Approach We describe the review generation task as follows.", "Given a user u, item i, several short phrases {d 1 , d 2 , ..., d M }, and a group of extracted aspects {A 1 , A 2 , ..., A k }, our goal is to generate a review (w 1 , w 2 , ..., w T ) that maximizes the probability P (w 1:T |u, i, d 1:M ).", "To solve this task, we propose a method called ExpansionNet which contains two parts: 1) three encoders to leverage the input phrases and aspect information; and 2) a decoder with an attention fusion layer to generate sequences and align the generation with the input sources.", "The model structure is shown in Figure 1 .", "Sequence encoder, attribute encoder and aspect encoder Our sequence encoder is a two-layer bi-directional GRU, as is commonly used in sequence-tosequence (Seq2Seq) models .", "Input phrases first pass a word embedding layer, then go through the GRU one-by-one and finally yield a sequence of hidden states {e 1 , e 2 ..., e L }.", "In the case of multiple phrases, these share the same sequence encoder and have different lengths L. 
To simplify notation, we only consider one input phrase in this section.", "The attribute encoder and aspect encoder both consist of two embedding layers and a projection layer.", "For the attribute encoder, we define two general embedding layers E u ∈ R |U |×m and E i ∈ R |I|×m to obtain the attribute latent factors γ u and γ i ; for the aspect encoder, we use two aspect-aware embedding layers E u ∈ R |U |×k and E i ∈ R |I|×k to obtain aspect-aware latent factors β u and β i .", "Here |U|, |I|, m and k are the number of users, number of items, the dimension of attributes, and the number of aspects, respectively.", "After the embedding layers, the attribute and aspect-aware latent factors are concatenated and fed into a projection layer with tanh activation.", "The outputs are calculated as: γ u = E u (u), γ i = E i (i) (1) β u = E u (u), β i = E i (i) (2) u = tanh(W u [γ u ; γ i ] + b u ) (3) v = tanh(W v [β u ; β i ] + b v ) (4) where W u ∈ R n×2m , b u ∈ R n , W v ∈ R n×2k , b v ∈ R n are learnable parameters and n is the dimensionality of the hidden units in the decoder.", "Decoder with attention fusion layer The decoder is a two-layer GRU that predicts the target words given the start token.", "The hidden state of the decoder is initialized using the sum of the three encoders' outputs.", "The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word.", "Specifically: h 0 = e L + u + v (5) h t = GRU(w t , h t−1 ), (6) where h 0 ∈ R n is the decoder's initial hidden state and h t ∈ R n is the hidden state at time-step t. To fully exploit the encoder-side information, we apply an attention fusion layer to summarize the output of each encoder and jointly determine the final word distribution.", "For the sequence encoder, the attention vector is defined as in many other applications Luong et al., 2015) : a 1 t = L j=1 α 1 tj e j (7) α 1 tj = exp(tanh(v 1 α (W 1 α [e j ; h t ] + b 1 α )))/Z, (8) where a 1 t ∈ R n is the attention vector on the sequence encoder at time-step t, α 1 tj is the attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.", "For the attribute encoder, the attention vector is calculated as: a 2 t = j∈u,i α 2 tj γ j (9) α 2 tj = exp(tanh(v 2 α (W 2 α [γ j ; h t ] + b 2 α )))/Z, (10) where a 2 t ∈ R n is the attention vector on the attribute encoder, and α 2 tj is the attention score between the attribute latent factor γ j and decoder hidden state h t .", "Inspired by the copy mechanism (Gu et al., 2016; See et al., 2017) , we design an attention vector that estimates the probability that each aspect will be discussed in the next time-step: s ui = W s [β u ; β i ] + b s (11) a 3 t = tanh(W 3 α [s ui ; e t ; h t ] + b 3 α ), (12) where s ui ∈ R k is the aspect importance considering the interaction between u and i, e t is the decoder input after embedding layer at time-step t, and a 3 t ∈ R k is a probability vector to bias each aspect at time-step t. 
Finally, the first two attention vectors are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P v .", "The attention scores from the aspect encoder are then directly added to the aspect words in the final word distribution.", "The output probability for word w at time-step t is given by: where w t is the target word at time-step t, a 3 t [k] is the probability that aspect k will be discussed at time-step t, A k represents all words belonging to aspect k and 1 wt∈A k is a binary variable indicating whether w t belongs to aspect k. During inference, we use greedy decoding by choosing the word with maximum probability, denoted as y t = argmax wt softmax(P (w t )).", "Decoding finishes when an end token is encountered.", "Experiments We consider a real world dataset from Amazon Electronics (McAuley et al., 2015) to evaluate our model.", "We convert all text into lowercase, add start and end tokens to each review, and perform tokenization using NLTK.", "1 We discard reviews with length greater than 100 tokens and consider a vocabulary of 30,000 tokens.", "After preprocessing, the dataset contains 182,850 users, 59,043 items, and 992,172 reviews (sparsity 99.993%), which is much sparser than the datasets used in previous works (Dong et al., 2017; Ni et al., 2017) .", "On average, each review contains 49.32 tokens as well as a short-text summary of 4.52 tokens.", "In our experiments, the basic ExpansionNet uses these summaries as input phrases.", "We split the dataset into training (80%), validation (10%) and test sets (10%).", "All results are reported on the test set.", "Aspect Extraction We use the method 2 in (He et al., 2017) to extract 15 aspects and consider the top 100 words from each aspect.", "Table 2 shows 10 inferred aspects and representative words (inferred aspects are manually labeled).", "ExpansionNet calculates an attention score based on the user and item aspect-aware representation, then determines how much these representative words are biased in the output word distribution.", "1 https://www.nltk.org/ 2 https://github.com/ruidan/ Unsupervised-Aspect-Extraction Experiment Details We use PyTorch 3 to implement our model.", "4 Parameter settings are shown in Table 1 .", "For the attribute encoder and aspect encoder, we set the dimensionality to 64 and 15 respectively.", "For both the sequence encoder and decoder, we use a 2layer GRU with hidden size 512.", "We also add dropout layers before and after the GRUs.", "The dropout rate is set to 0.1.", "During training, the input sequences of the same source (e.g.", "review, summary) inside each batch are padded to the same length.", "Performance Evaluation We evaluate the model on six automatic metrics (Table 3) : Perplexity, BLEU-1/BLEU-4, ROUGE-L and Distinct-1/2 (percentage of distinct unigrams and bi-grams) .", "We compare User/Item user A3G831BTCLWGVQ and item B007M50PTM Review summary \"easy to use and nice standard apps\" Item title \"samsung galaxy tab 2 (10.1-Inch, wi-fi) 2012 model\" Real review \"the display is beautiful and the tablet is very easy to use.", "it comes with some really nice standard apps.\"", "AttrsSeq \"i bought this for my wife 's new ipad air .", "it fits perfectly and looks great .", "the only thing i do n't like is that the cover is a little too small for the ipad air . 
\"", "ExpansionNet \"i love this tablet .", "it is fast and easy to use .", "i have no complaints .", "i would recommend this tablet to anyone .\"", "+title \"i love this tablet .", "it is fast and easy to use .", "i have a galaxy tab 2 and i love it .\"", "+attribute & aspect \"i love this tablet .", "it is easy to use and the screen is very responsive .", "i love the fact that it has a micro sd slot .", "i have not tried the tablet app yet but i do n't have any problems with it .", "i am very happy with this tablet .\"", "Figure 2 : Examples of a real review and reviews generated by different models given a user, item, review summary, and item title.", "Highlights added for emphasis.", "against three baselines: Rand (randomly choose a review from the training set), GRU-LM (the GRU decoder works alone as a language model) and a state-of-the-art model Attr2Seq that only considers user and item attribute (Dong et al., 2017) .", "ExpansionNet (with summary, item title, attribute and aspect as input) achieves significant improvements over Attr2Seq on all metrics.", "As we add more input information, the model continues to obtain better results, except for the ROUGE-L metric.", "This proves that our model can effectively learn from short input phrases and aspect information and improve the correctness and diversity of generated results.", "Figure 2 presents a sample generation result.", "ExpansionNet captures fine-grained item information (e.g.", "that the item is a tablet), which Attr2Seq fails to recognize.", "Moreover, given a phrase like \"easy to use\" in the summary, ExpansionNet generates reviews containing the same text.", "This demonstrates the possibility of using our model in an assistive review generation scenario.", "Finally, given extra aspect information, the model successfully estimates that the screen would be an important aspect (i.e., for the current user and item); it generates phrases such as \"screen is very respon- sive\" about the aspect \"screen\" which is also covered in the real (ground-truth) review (\"display is beautiful\").", "We are also interested in seeing how the aspectaware representation can find related aspects and bias the generation to discuss more about those aspects.", "We analyze the average number of aspects in real and generated reviews and show on average how many aspects in real reviews are covered in generated reviews.", "We consider a review as covering an aspect if any of the aspect's representative words exists in the review.", "As shown in Table 4 , Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews.", "On the other hand, ExpansionNet better captures the distribution of aspects that are discussed in real reviews." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3" ], "paper_header_content": [ "Introduction", "Related Work", "Approach", "Sequence encoder, attribute encoder and aspect encoder", "Decoder with attention fusion layer", "Experiments", "Aspect Extraction", "Experiment Details", "Performance Evaluation" ] }
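The attribute and aspect encoders described in the paper content above (Eqs. 1-4) reduce to two embedding lookups per side followed by a tanh projection, with the decoder initialized as h_0 = e_L + u + v. Below is a hedged PyTorch sketch of those encoders; the class and variable names are invented for this illustration, the dimensions (64 attribute dimensions, 15 aspects, hidden size 512) follow the reported experiment settings, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn

class AttributeAspectEncoder(nn.Module):
    """Illustrative sketch of ExpansionNet's attribute encoder (E_u, E_i) and
    aspect-aware encoder (E'_u, E'_i) with their tanh projections."""

    def __init__(self, n_users, n_items, attr_dim=64, n_aspects=15, hidden=512):
        super().__init__()
        self.user_attr = nn.Embedding(n_users, attr_dim)      # gamma_u lookup
        self.item_attr = nn.Embedding(n_items, attr_dim)      # gamma_i lookup
        self.user_aspect = nn.Embedding(n_users, n_aspects)   # beta_u lookup
        self.item_aspect = nn.Embedding(n_items, n_aspects)   # beta_i lookup
        self.proj_attr = nn.Linear(2 * attr_dim, hidden)      # W_u, b_u
        self.proj_aspect = nn.Linear(2 * n_aspects, hidden)   # W_v, b_v

    def forward(self, user_ids, item_ids):
        gamma = torch.cat([self.user_attr(user_ids), self.item_attr(item_ids)], dim=-1)
        beta = torch.cat([self.user_aspect(user_ids), self.item_aspect(item_ids)], dim=-1)
        u = torch.tanh(self.proj_attr(gamma))     # attribute vector u
        v = torch.tanh(self.proj_aspect(beta))    # aspect-aware vector v
        return u, v

# usage: the decoder's initial hidden state is h0 = e_L + u + v, where e_L is
# the last hidden state of the bi-GRU sequence encoder over the input phrase
enc = AttributeAspectEncoder(n_users=1000, n_items=500)
u, v = enc(torch.tensor([3]), torch.tensor([7]))
print(u.shape, v.shape)  # torch.Size([1, 512]) torch.Size([1, 512])
```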
GEM-SciDuet-train-78#paper-1203#slide-0
Reviews in recommender system
Great purchase. Works fast and has all the applications ... Great unit! By Thinking Independently on April 12, 2018 Style: Tablet Verified Purchase This is a great tablet! Setup was super easy, I had it going in short order. I've been using it a lot with no issues. Battery life is great, lasts a few hours with constant use. I don't know how long the battery would last in standby mode because I haven't left it unused for more than a few hours. So much more reliable so far than my laptop which was more expensive. Great purchase. Works fast and has all the applications I might need for general work, school and play. The Samsung Galaxy Tab E 16 GB WiFi works fine for me, it does everything I want it to do. I'm happy with my purchase and would buy it again. The screen is nice and bright for reading. I've watched a number of videos, they look fine both visually, and audio. I don't know the frame rate and the other technical specs off the top of my head, but I am well pleased with the display and the audio performance. I have run some apps from the App Store and it runs fine for those. I'm not a big app person, I primarily use it for email, web browsing, and to stream video and music and I've been very happy with it, and have had no issues. I've paired it with a small Bluetooth speaker and it worked fine with no issues also.
Great purchase. Works fast and has all the applications ... Great unit! By Thinking Independently on April 12, 2018 Style: Tablet Verified Purchase This is a great tablet! Setup was super easy, I had it going in short order. I've been using it a lot with no issues. Battery life is great, lasts a few hours with constant use. I don't know how long the battery would last in standby mode because I haven't left it unused for more than a few hours. So much more reliable so far than my laptop which was more expensive. Great purchase. Works fast and has all the applications I might need for general work, school and play. The Samsung Galaxy Tab E 16 GB WiFi works fine for me, it does everything I want it to do. I'm happy with my purchase and would buy it again. The screen is nice and bright for reading. I've watched a number of videos, they look fine both visually, and audio. I don't know the frame rate and the other technical specs off the top of my head, but I am well pleased with the display and the audio performance. I have run some apps from the App Store and it runs fine for those. I'm not a big app person, I primarily use it for email, web browsing, and to stream video and music and I've been very happy with it, and have had no issues. I've paired it with a small Bluetooth speaker and it worked fine with no issues also.
[]
GEM-SciDuet-train-78#paper-1203#slide-1
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
In this paper, we focus on the problem of building assistive systems that can help users to write reviews. We cast this problem using an encoder-decoder framework that generates personalized reviews by expanding short phrases (e.g. review summaries, product titles) provided as input to the system. We incorporate aspect-level information via an aspect encoder that learns 'aspect-aware' user and item representations. An attention fusion layer is applied to control generation by attending on the outputs of multiple encoders. Experimental results show that our model is capable of generating coherent and diverse reviews that expand the contents of input phrases. In addition, the learned aspect-aware representations discover those aspects that users are more inclined to discuss and bias the generated text toward their personalized aspect preferences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 ], "paper_content_text": [ "Introduction Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017) .", "Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects.", "A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural lan-guage queries (i.e., to find items about which a user would be most likely to write a particular phrase).", "Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017) .", "Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications.", "For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review.", "Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017) , extraction of sentiment lexicons (Zhang et al., 2014) , and aspectaware sentiment analysis (Wang et al., 2016; McAuley et al., 2012) .", "These works are related to review generation since they can provide prior knowledge to supervise the generative process.", "We are interested in exploring how such knowledge (e.g.", "extracted aspects) can be used in the review generation task.", "In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge.", "Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases.", "These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g.", "brand words, technical specifications, etc.).", "We propose an encoderdecoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder.", "The sequence encoder uses a gated recurrent unit 0 0 0 … 1 0 0 1 0 … 0 0 (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships.", "The aspect-aware representation is helpful to discover what each user is likely to discuss about each 
item.", "Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer.", "The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects.", "Related Work Review generation belongs to a large body of work on data-to-text natural language generation (Gatt and Krahmer, 2017) , which has applications including summarization (See et al., 2017) , image captioning (Vinyals et al., 2015) , and dialogue response generation (Xing et al., 2017; Ghosh et al., 2017) , among others.", "Among these, review generation is characterized by the need to generate long sequences and estimate high-order interactions between users and items.", "Several approaches have been recently proposed to tackle these problems.", "Dong et al.", "(2017) proposed an attribute-to-sequence (Attr2Seq) method to encode user and item identities as well as rating information with a multi-layer perceptron and a decoder then generates reviews conditioned on this information.", "They also used an attention mechanism to strengthen the alignment between output and input attributes.", "Ni et al.", "(2017) trained a collaborative-filtering generative concatenative network to jointly learn the tasks of review generation and item recommendation.", "Zang and Wan (2017) proposed a hierarchical structure to generate long reviews; they assume each sentence is associated with an aspect score, and learn the attention between aspect scores and sentences during training.", "Our approach differs from these mainly in our goal of incorporating auxiliary textual information (short phrases, product specifications, etc.)", "into the generative process, which facilitates the generation of higher-fidelity reviews.", "Another line of work related to review generation is aspect extraction and opinion mining (Park et al., 2015; Qiu et al., 2017; He et al., 2017; Chen et al., 2014) .", "In this paper, we argue that the extra aspect (opinion) information extracted using these previous works can effectively improve the quality of generated reviews.", "We propose a simple but effective way to combine aspect information into the generative model.", "Approach We describe the review generation task as follows.", "Given a user u, item i, several short phrases {d 1 , d 2 , ..., d M }, and a group of extracted aspects {A 1 , A 2 , ..., A k }, our goal is to generate a review (w 1 , w 2 , ..., w T ) that maximizes the probability P (w 1:T |u, i, d 1:M ).", "To solve this task, we propose a method called ExpansionNet which contains two parts: 1) three encoders to leverage the input phrases and aspect information; and 2) a decoder with an attention fusion layer to generate sequences and align the generation with the input sources.", "The model structure is shown in Figure 1 .", "Sequence encoder, attribute encoder and aspect encoder Our sequence encoder is a two-layer bi-directional GRU, as is commonly used in sequence-tosequence (Seq2Seq) models .", "Input phrases first pass a word embedding layer, then go through the GRU one-by-one and finally yield a sequence of hidden states {e 1 , e 2 ..., e L }.", "In the case of multiple phrases, these share the same sequence encoder and have different lengths L. 
To simplify notation, we only consider one input phrase in this section.", "The attribute encoder and aspect encoder both consist of two embedding layers and a projection layer.", "For the attribute encoder, we define two general embedding layers E u ∈ R |U |×m and E i ∈ R |I|×m to obtain the attribute latent factors γ u and γ i ; for the aspect encoder, we use two aspect-aware embedding layers E u ∈ R |U |×k and E i ∈ R |I|×k to obtain aspect-aware latent factors β u and β i .", "Here |U|, |I|, m and k are the number of users, number of items, the dimension of attributes, and the number of aspects, respectively.", "After the embedding layers, the attribute and aspect-aware latent factors are concatenated and fed into a projection layer with tanh activation.", "The outputs are calculated as: γ u = E u (u), γ i = E i (i) (1) β u = E u (u), β i = E i (i) (2) u = tanh(W u [γ u ; γ i ] + b u ) (3) v = tanh(W v [β u ; β i ] + b v ) (4) where W u ∈ R n×2m , b u ∈ R n , W v ∈ R n×2k , b v ∈ R n are learnable parameters and n is the dimensionality of the hidden units in the decoder.", "Decoder with attention fusion layer The decoder is a two-layer GRU that predicts the target words given the start token.", "The hidden state of the decoder is initialized using the sum of the three encoders' outputs.", "The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word.", "Specifically: h 0 = e L + u + v (5) h t = GRU(w t , h t−1 ), (6) where h 0 ∈ R n is the decoder's initial hidden state and h t ∈ R n is the hidden state at time-step t. To fully exploit the encoder-side information, we apply an attention fusion layer to summarize the output of each encoder and jointly determine the final word distribution.", "For the sequence encoder, the attention vector is defined as in many other applications Luong et al., 2015) : a 1 t = L j=1 α 1 tj e j (7) α 1 tj = exp(tanh(v 1 α (W 1 α [e j ; h t ] + b 1 α )))/Z, (8) where a 1 t ∈ R n is the attention vector on the sequence encoder at time-step t, α 1 tj is the attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.", "For the attribute encoder, the attention vector is calculated as: a 2 t = j∈u,i α 2 tj γ j (9) α 2 tj = exp(tanh(v 2 α (W 2 α [γ j ; h t ] + b 2 α )))/Z, (10) where a 2 t ∈ R n is the attention vector on the attribute encoder, and α 2 tj is the attention score between the attribute latent factor γ j and decoder hidden state h t .", "Inspired by the copy mechanism (Gu et al., 2016; See et al., 2017) , we design an attention vector that estimates the probability that each aspect will be discussed in the next time-step: s ui = W s [β u ; β i ] + b s (11) a 3 t = tanh(W 3 α [s ui ; e t ; h t ] + b 3 α ), (12) where s ui ∈ R k is the aspect importance considering the interaction between u and i, e t is the decoder input after embedding layer at time-step t, and a 3 t ∈ R k is a probability vector to bias each aspect at time-step t. 
Finally, the first two attention vectors are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P v .", "The attention scores from the aspect encoder are then directly added to the aspect words in the final word distribution.", "The output probability for word w at time-step t is given by: where w t is the target word at time-step t, a 3 t [k] is the probability that aspect k will be discussed at time-step t, A k represents all words belonging to aspect k and 1 wt∈A k is a binary variable indicating whether w t belongs to aspect k. During inference, we use greedy decoding by choosing the word with maximum probability, denoted as y t = argmax wt softmax(P (w t )).", "Decoding finishes when an end token is encountered.", "Experiments We consider a real world dataset from Amazon Electronics (McAuley et al., 2015) to evaluate our model.", "We convert all text into lowercase, add start and end tokens to each review, and perform tokenization using NLTK.", "1 We discard reviews with length greater than 100 tokens and consider a vocabulary of 30,000 tokens.", "After preprocessing, the dataset contains 182,850 users, 59,043 items, and 992,172 reviews (sparsity 99.993%), which is much sparser than the datasets used in previous works (Dong et al., 2017; Ni et al., 2017) .", "On average, each review contains 49.32 tokens as well as a short-text summary of 4.52 tokens.", "In our experiments, the basic ExpansionNet uses these summaries as input phrases.", "We split the dataset into training (80%), validation (10%) and test sets (10%).", "All results are reported on the test set.", "Aspect Extraction We use the method 2 in (He et al., 2017) to extract 15 aspects and consider the top 100 words from each aspect.", "Table 2 shows 10 inferred aspects and representative words (inferred aspects are manually labeled).", "ExpansionNet calculates an attention score based on the user and item aspect-aware representation, then determines how much these representative words are biased in the output word distribution.", "1 https://www.nltk.org/ 2 https://github.com/ruidan/ Unsupervised-Aspect-Extraction Experiment Details We use PyTorch 3 to implement our model.", "4 Parameter settings are shown in Table 1 .", "For the attribute encoder and aspect encoder, we set the dimensionality to 64 and 15 respectively.", "For both the sequence encoder and decoder, we use a 2layer GRU with hidden size 512.", "We also add dropout layers before and after the GRUs.", "The dropout rate is set to 0.1.", "During training, the input sequences of the same source (e.g.", "review, summary) inside each batch are padded to the same length.", "Performance Evaluation We evaluate the model on six automatic metrics (Table 3) : Perplexity, BLEU-1/BLEU-4, ROUGE-L and Distinct-1/2 (percentage of distinct unigrams and bi-grams) .", "We compare User/Item user A3G831BTCLWGVQ and item B007M50PTM Review summary \"easy to use and nice standard apps\" Item title \"samsung galaxy tab 2 (10.1-Inch, wi-fi) 2012 model\" Real review \"the display is beautiful and the tablet is very easy to use.", "it comes with some really nice standard apps.\"", "AttrsSeq \"i bought this for my wife 's new ipad air .", "it fits perfectly and looks great .", "the only thing i do n't like is that the cover is a little too small for the ipad air . 
\"", "ExpansionNet \"i love this tablet .", "it is fast and easy to use .", "i have no complaints .", "i would recommend this tablet to anyone .\"", "+title \"i love this tablet .", "it is fast and easy to use .", "i have a galaxy tab 2 and i love it .\"", "+attribute & aspect \"i love this tablet .", "it is easy to use and the screen is very responsive .", "i love the fact that it has a micro sd slot .", "i have not tried the tablet app yet but i do n't have any problems with it .", "i am very happy with this tablet .\"", "Figure 2 : Examples of a real review and reviews generated by different models given a user, item, review summary, and item title.", "Highlights added for emphasis.", "against three baselines: Rand (randomly choose a review from the training set), GRU-LM (the GRU decoder works alone as a language model) and a state-of-the-art model Attr2Seq that only considers user and item attribute (Dong et al., 2017) .", "ExpansionNet (with summary, item title, attribute and aspect as input) achieves significant improvements over Attr2Seq on all metrics.", "As we add more input information, the model continues to obtain better results, except for the ROUGE-L metric.", "This proves that our model can effectively learn from short input phrases and aspect information and improve the correctness and diversity of generated results.", "Figure 2 presents a sample generation result.", "ExpansionNet captures fine-grained item information (e.g.", "that the item is a tablet), which Attr2Seq fails to recognize.", "Moreover, given a phrase like \"easy to use\" in the summary, ExpansionNet generates reviews containing the same text.", "This demonstrates the possibility of using our model in an assistive review generation scenario.", "Finally, given extra aspect information, the model successfully estimates that the screen would be an important aspect (i.e., for the current user and item); it generates phrases such as \"screen is very respon- sive\" about the aspect \"screen\" which is also covered in the real (ground-truth) review (\"display is beautiful\").", "We are also interested in seeing how the aspectaware representation can find related aspects and bias the generation to discuss more about those aspects.", "We analyze the average number of aspects in real and generated reviews and show on average how many aspects in real reviews are covered in generated reviews.", "We consider a review as covering an aspect if any of the aspect's representative words exists in the review.", "As shown in Table 4 , Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews.", "On the other hand, ExpansionNet better captures the distribution of aspects that are discussed in real reviews." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3" ], "paper_header_content": [ "Introduction", "Related Work", "Approach", "Sequence encoder, attribute encoder and aspect encoder", "Decoder with attention fusion layer", "Experiments", "Aspect Extraction", "Experiment Details", "Performance Evaluation" ] }
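One detail of the decoder described above that is easy to miss is how the aspect attention vector a3_t biases the final word distribution: each aspect's score is added directly onto the probabilities of that aspect's representative words before greedy decoding picks the argmax. The sketch below illustrates just that step; the mapping `aspect_word_ids`, the tensor shapes, and operating on probabilities rather than logits are assumptions for this example, not the paper's implementation.

```python
import torch

def bias_with_aspects(p_vocab, aspect_scores, aspect_word_ids):
    """p_vocab: (batch, vocab_size) output word distribution P_v from the decoder.
    aspect_scores: (batch, n_aspects) probabilities a3_t that each aspect is
    discussed at this time-step.
    aspect_word_ids: one LongTensor of vocabulary ids per aspect, holding that
    aspect's representative words (e.g. the top-100 words per aspect)."""
    p = p_vocab.clone()
    for k, word_ids in enumerate(aspect_word_ids):
        # add aspect k's score to every representative word of aspect k
        p[:, word_ids] += aspect_scores[:, k:k + 1]
    return p

# greedy decoding then picks y_t = bias_with_aspects(...).argmax(dim=-1)
```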
GEM-SciDuet-train-78#paper-1203#slide-1
Help user write reviews in an easier way
Expand and rewrite phrases Estimate reactions and provide suggestions
Expand and rewrite phrases Estimate reactions and provide suggestions
[]
GEM-SciDuet-train-78#paper-1203#slide-2
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
In this paper, we focus on the problem of building assistive systems that can help users to write reviews. We cast this problem using an encoder-decoder framework that generates personalized reviews by expanding short phrases (e.g. review summaries, product titles) provided as input to the system. We incorporate aspect-level information via an aspect encoder that learns 'aspect-aware' user and item representations. An attention fusion layer is applied to control generation by attending on the outputs of multiple encoders. Experimental results show that our model is capable of generating coherent and diverse reviews that expand the contents of input phrases. In addition, the learned aspect-aware representations discover those aspects that users are more inclined to discuss and bias the generated text toward their personalized aspect preferences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 ], "paper_content_text": [ "Introduction Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017) .", "Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects.", "A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural lan-guage queries (i.e., to find items about which a user would be most likely to write a particular phrase).", "Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017) .", "Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications.", "For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review.", "Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017) , extraction of sentiment lexicons (Zhang et al., 2014) , and aspectaware sentiment analysis (Wang et al., 2016; McAuley et al., 2012) .", "These works are related to review generation since they can provide prior knowledge to supervise the generative process.", "We are interested in exploring how such knowledge (e.g.", "extracted aspects) can be used in the review generation task.", "In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge.", "Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases.", "These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g.", "brand words, technical specifications, etc.).", "We propose an encoderdecoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder.", "The sequence encoder uses a gated recurrent unit 0 0 0 … 1 0 0 1 0 … 0 0 (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships.", "The aspect-aware representation is helpful to discover what each user is likely to discuss about each 
item.", "Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer.", "The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects.", "Related Work Review generation belongs to a large body of work on data-to-text natural language generation (Gatt and Krahmer, 2017) , which has applications including summarization (See et al., 2017) , image captioning (Vinyals et al., 2015) , and dialogue response generation (Xing et al., 2017; Ghosh et al., 2017) , among others.", "Among these, review generation is characterized by the need to generate long sequences and estimate high-order interactions between users and items.", "Several approaches have been recently proposed to tackle these problems.", "Dong et al.", "(2017) proposed an attribute-to-sequence (Attr2Seq) method to encode user and item identities as well as rating information with a multi-layer perceptron and a decoder then generates reviews conditioned on this information.", "They also used an attention mechanism to strengthen the alignment between output and input attributes.", "Ni et al.", "(2017) trained a collaborative-filtering generative concatenative network to jointly learn the tasks of review generation and item recommendation.", "Zang and Wan (2017) proposed a hierarchical structure to generate long reviews; they assume each sentence is associated with an aspect score, and learn the attention between aspect scores and sentences during training.", "Our approach differs from these mainly in our goal of incorporating auxiliary textual information (short phrases, product specifications, etc.)", "into the generative process, which facilitates the generation of higher-fidelity reviews.", "Another line of work related to review generation is aspect extraction and opinion mining (Park et al., 2015; Qiu et al., 2017; He et al., 2017; Chen et al., 2014) .", "In this paper, we argue that the extra aspect (opinion) information extracted using these previous works can effectively improve the quality of generated reviews.", "We propose a simple but effective way to combine aspect information into the generative model.", "Approach We describe the review generation task as follows.", "Given a user u, item i, several short phrases {d 1 , d 2 , ..., d M }, and a group of extracted aspects {A 1 , A 2 , ..., A k }, our goal is to generate a review (w 1 , w 2 , ..., w T ) that maximizes the probability P (w 1:T |u, i, d 1:M ).", "To solve this task, we propose a method called ExpansionNet which contains two parts: 1) three encoders to leverage the input phrases and aspect information; and 2) a decoder with an attention fusion layer to generate sequences and align the generation with the input sources.", "The model structure is shown in Figure 1 .", "Sequence encoder, attribute encoder and aspect encoder Our sequence encoder is a two-layer bi-directional GRU, as is commonly used in sequence-tosequence (Seq2Seq) models .", "Input phrases first pass a word embedding layer, then go through the GRU one-by-one and finally yield a sequence of hidden states {e 1 , e 2 ..., e L }.", "In the case of multiple phrases, these share the same sequence encoder and have different lengths L. 
To simplify notation, we only consider one input phrase in this section.", "The attribute encoder and aspect encoder both consist of two embedding layers and a projection layer.", "For the attribute encoder, we define two general embedding layers E u ∈ R |U |×m and E i ∈ R |I|×m to obtain the attribute latent factors γ u and γ i ; for the aspect encoder, we use two aspect-aware embedding layers E u ∈ R |U |×k and E i ∈ R |I|×k to obtain aspect-aware latent factors β u and β i .", "Here |U|, |I|, m and k are the number of users, number of items, the dimension of attributes, and the number of aspects, respectively.", "After the embedding layers, the attribute and aspect-aware latent factors are concatenated and fed into a projection layer with tanh activation.", "The outputs are calculated as: γ u = E u (u), γ i = E i (i) (1) β u = E u (u), β i = E i (i) (2) u = tanh(W u [γ u ; γ i ] + b u ) (3) v = tanh(W v [β u ; β i ] + b v ) (4) where W u ∈ R n×2m , b u ∈ R n , W v ∈ R n×2k , b v ∈ R n are learnable parameters and n is the dimensionality of the hidden units in the decoder.", "Decoder with attention fusion layer The decoder is a two-layer GRU that predicts the target words given the start token.", "The hidden state of the decoder is initialized using the sum of the three encoders' outputs.", "The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word.", "Specifically: h 0 = e L + u + v (5) h t = GRU(w t , h t−1 ), (6) where h 0 ∈ R n is the decoder's initial hidden state and h t ∈ R n is the hidden state at time-step t. To fully exploit the encoder-side information, we apply an attention fusion layer to summarize the output of each encoder and jointly determine the final word distribution.", "For the sequence encoder, the attention vector is defined as in many other applications Luong et al., 2015) : a 1 t = L j=1 α 1 tj e j (7) α 1 tj = exp(tanh(v 1 α (W 1 α [e j ; h t ] + b 1 α )))/Z, (8) where a 1 t ∈ R n is the attention vector on the sequence encoder at time-step t, α 1 tj is the attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.", "For the attribute encoder, the attention vector is calculated as: a 2 t = j∈u,i α 2 tj γ j (9) α 2 tj = exp(tanh(v 2 α (W 2 α [γ j ; h t ] + b 2 α )))/Z, (10) where a 2 t ∈ R n is the attention vector on the attribute encoder, and α 2 tj is the attention score between the attribute latent factor γ j and decoder hidden state h t .", "Inspired by the copy mechanism (Gu et al., 2016; See et al., 2017) , we design an attention vector that estimates the probability that each aspect will be discussed in the next time-step: s ui = W s [β u ; β i ] + b s (11) a 3 t = tanh(W 3 α [s ui ; e t ; h t ] + b 3 α ), (12) where s ui ∈ R k is the aspect importance considering the interaction between u and i, e t is the decoder input after embedding layer at time-step t, and a 3 t ∈ R k is a probability vector to bias each aspect at time-step t. 
Finally, the first two attention vectors are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P v .", "The attention scores from the aspect encoder are then directly added to the aspect words in the final word distribution.", "The output probability for word w at time-step t is given by: where w t is the target word at time-step t, a 3 t [k] is the probability that aspect k will be discussed at time-step t, A k represents all words belonging to aspect k and 1 wt∈A k is a binary variable indicating whether w t belongs to aspect k. During inference, we use greedy decoding by choosing the word with maximum probability, denoted as y t = argmax wt softmax(P (w t )).", "Decoding finishes when an end token is encountered.", "Experiments We consider a real world dataset from Amazon Electronics (McAuley et al., 2015) to evaluate our model.", "We convert all text into lowercase, add start and end tokens to each review, and perform tokenization using NLTK.", "1 We discard reviews with length greater than 100 tokens and consider a vocabulary of 30,000 tokens.", "After preprocessing, the dataset contains 182,850 users, 59,043 items, and 992,172 reviews (sparsity 99.993%), which is much sparser than the datasets used in previous works (Dong et al., 2017; Ni et al., 2017) .", "On average, each review contains 49.32 tokens as well as a short-text summary of 4.52 tokens.", "In our experiments, the basic ExpansionNet uses these summaries as input phrases.", "We split the dataset into training (80%), validation (10%) and test sets (10%).", "All results are reported on the test set.", "Aspect Extraction We use the method 2 in (He et al., 2017) to extract 15 aspects and consider the top 100 words from each aspect.", "Table 2 shows 10 inferred aspects and representative words (inferred aspects are manually labeled).", "ExpansionNet calculates an attention score based on the user and item aspect-aware representation, then determines how much these representative words are biased in the output word distribution.", "1 https://www.nltk.org/ 2 https://github.com/ruidan/ Unsupervised-Aspect-Extraction Experiment Details We use PyTorch 3 to implement our model.", "4 Parameter settings are shown in Table 1 .", "For the attribute encoder and aspect encoder, we set the dimensionality to 64 and 15 respectively.", "For both the sequence encoder and decoder, we use a 2layer GRU with hidden size 512.", "We also add dropout layers before and after the GRUs.", "The dropout rate is set to 0.1.", "During training, the input sequences of the same source (e.g.", "review, summary) inside each batch are padded to the same length.", "Performance Evaluation We evaluate the model on six automatic metrics (Table 3) : Perplexity, BLEU-1/BLEU-4, ROUGE-L and Distinct-1/2 (percentage of distinct unigrams and bi-grams) .", "We compare User/Item user A3G831BTCLWGVQ and item B007M50PTM Review summary \"easy to use and nice standard apps\" Item title \"samsung galaxy tab 2 (10.1-Inch, wi-fi) 2012 model\" Real review \"the display is beautiful and the tablet is very easy to use.", "it comes with some really nice standard apps.\"", "AttrsSeq \"i bought this for my wife 's new ipad air .", "it fits perfectly and looks great .", "the only thing i do n't like is that the cover is a little too small for the ipad air . 
\"", "ExpansionNet \"i love this tablet .", "it is fast and easy to use .", "i have no complaints .", "i would recommend this tablet to anyone .\"", "+title \"i love this tablet .", "it is fast and easy to use .", "i have a galaxy tab 2 and i love it .\"", "+attribute & aspect \"i love this tablet .", "it is easy to use and the screen is very responsive .", "i love the fact that it has a micro sd slot .", "i have not tried the tablet app yet but i do n't have any problems with it .", "i am very happy with this tablet .\"", "Figure 2 : Examples of a real review and reviews generated by different models given a user, item, review summary, and item title.", "Highlights added for emphasis.", "against three baselines: Rand (randomly choose a review from the training set), GRU-LM (the GRU decoder works alone as a language model) and a state-of-the-art model Attr2Seq that only considers user and item attribute (Dong et al., 2017) .", "ExpansionNet (with summary, item title, attribute and aspect as input) achieves significant improvements over Attr2Seq on all metrics.", "As we add more input information, the model continues to obtain better results, except for the ROUGE-L metric.", "This proves that our model can effectively learn from short input phrases and aspect information and improve the correctness and diversity of generated results.", "Figure 2 presents a sample generation result.", "ExpansionNet captures fine-grained item information (e.g.", "that the item is a tablet), which Attr2Seq fails to recognize.", "Moreover, given a phrase like \"easy to use\" in the summary, ExpansionNet generates reviews containing the same text.", "This demonstrates the possibility of using our model in an assistive review generation scenario.", "Finally, given extra aspect information, the model successfully estimates that the screen would be an important aspect (i.e., for the current user and item); it generates phrases such as \"screen is very respon- sive\" about the aspect \"screen\" which is also covered in the real (ground-truth) review (\"display is beautiful\").", "We are also interested in seeing how the aspectaware representation can find related aspects and bias the generation to discuss more about those aspects.", "We analyze the average number of aspects in real and generated reviews and show on average how many aspects in real reviews are covered in generated reviews.", "We consider a review as covering an aspect if any of the aspect's representative words exists in the review.", "As shown in Table 4 , Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews.", "On the other hand, ExpansionNet better captures the distribution of aspects that are discussed in real reviews." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3" ], "paper_header_content": [ "Introduction", "Related Work", "Approach", "Sequence encoder, attribute encoder and aspect encoder", "Decoder with attention fusion layer", "Experiments", "Aspect Extraction", "Experiment Details", "Performance Evaluation" ] }
GEM-SciDuet-train-78#paper-1203#slide-2
Incorporate information and knowledge
User and item attribute Dong et al. EACL 2017. Learning to Generate Product Reviews from Tang et al. Arxiv 2016. Context-aware Natural Language Generation with Recurrent Neural Networks. Short phrases (user input) Service vendor seller supplier reply refund Price price value overall dependable reliable Screen screen touchscreen browse display scrolling Interaction Case case cover briefcase portfolio A1 AK A1 AK Drive drive disk copying copied fat32 U1 I1 Table 1 Representative words of aspects
User and item attribute Dong et al. EACL 2017. Learning to Generate Product Reviews from Tang et al. Arxiv 2016. Context-aware Natural Language Generation with Recurrent Neural Networks. Short phrases (user input) Service vendor seller supplier reply refund Price price value overall dependable reliable Screen screen touchscreen browse display scrolling Interaction Case case cover briefcase portfolio A1 AK A1 AK Drive drive disk copying copied fat32 U1 I1 Table 1 Representative words of aspects
[]
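The slide above lists the aspect lexicons (Table 1's representative words per aspect); at generation time the model adds the aspect scores a3_t to the scores of exactly those lexicon words before taking the argmax (Section 3.2). A sketch of that output step follows, assuming a precomputed binary aspect-word mask and adding the bias to the pre-softmax scores; both choices are my reading of the paper, not a confirmed implementation detail.

```python
import torch

def output_step(h_t, a1_t, a2_t, a3_t, proj, aspect_word_mask):
    """Combine attention vectors into the word distribution and add the aspect bias.

    proj: a linear layer mapping the concatenation [a1_t; a2_t; h_t] (size n + m + n)
          to vocabulary scores P_v. Illustrative stand-in, not the authors' exact layer.
    aspect_word_mask: (k, vocab) binary tensor, 1 where a word belongs to aspect k.
    """
    p_v = proj(torch.cat([a1_t, a2_t, h_t], dim=-1))       # vocabulary scores P_v
    aspect_bias = a3_t @ aspect_word_mask                   # sum_k a3_t[k] * 1[w in A_k]
    p_w = torch.softmax(p_v + aspect_bias, dim=-1)          # final word distribution
    return p_w.argmax(dim=-1)                               # greedy decoding: y_t = argmax
```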
GEM-SciDuet-train-78#paper-1203#slide-3
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
GEM-SciDuet-train-78#paper-1203#slide-3
Proposed method
Sequence attention Attribute attention Aspect attention Attribute latent factor Aspect-aware factor Aspect preference score Embedding layers easy to use Sequence Encoder Attribute Encoder Aspect Encoder A1 A2 Ak <str> the display is beautiful and easy to Pv(display) + Pdisplay in Ak(Ak) Pw(display) Projection layer Aspect bias
Sequence attention Attribute attention Aspect attention Attribute latent factor Aspect-aware factor Aspect preference score Embedding layers easy to use Sequence Encoder Attribute Encoder Aspect Encoder A1 A2 Ak <str> the display is beautiful and easy to Pv(display) + Pdisplay in Ak(Ak) Pw(display) Projection layer Aspect bias
[]
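The "Proposed method" slide above shows the same pipeline end to end: encode the phrase and the user/item, run the decoder with the attention fusion layer, then bias the word distribution toward aspect words (the Pv plus aspect-bias path to Pw in the diagram). Purely as a usage illustration, the sketches from the earlier inserts could be wired together for one greedy step as follows; every name refers to those illustrative classes, not to the paper's actual code, and the token ids are made up.

```python
import torch

# Hypothetical wiring of the sketches above for a single greedy decoding step.
vocab_size, n_users, n_items = 30000, 182850, 59043      # sizes reported in the paper
enc = ExpansionNetEncoders(vocab_size, n_users, n_items)
dec = AttentionFusionDecoder(vocab_size)
proj = torch.nn.Linear(512 + 64 + 512, vocab_size)        # projects [a1; a2; h_t]
aspect_word_mask = torch.zeros(15, vocab_size)            # 1 where a word is in an aspect lexicon

phrase = torch.randint(0, vocab_size, (1, 5))             # e.g. a tokenized review summary
user, item = torch.tensor([3]), torch.tensor([7])         # made-up ids

e, (gamma_u, gamma_i), (beta_u, beta_i), u, v = enc(phrase, user, item)
h = e[:, -1, :] + u + v                                   # Eq. (5): h_0 = e_L + u + v
w = torch.tensor([1])                                     # assumed <str> token id
h, a1, a2, a3 = dec(w, h, e, gamma_u, gamma_i, beta_u, beta_i)
y = output_step(h, a1, a2, a3, proj, aspect_word_mask)    # greedy next word id
```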
GEM-SciDuet-train-78#paper-1203#slide-4
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
GEM-SciDuet-train-78#paper-1203#slide-4
Experiment setting
Vocabulary of 30,000 tokens Much sparser than previous work Use teacher-forcing and masked cross-entropy loss
Vocabulary of 30,000 tokens Much sparser than previous work Use teacher-forcing and masked cross-entropy loss
[]
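The "Experiment setting" slide above mentions training with teacher forcing and a masked cross-entropy loss, and the paper notes that sequences from the same source are padded to equal length within a batch. The paper does not give the loss code, so the following is only a plausible sketch: feed the gold previous tokens to the decoder (teacher forcing) and ignore padding positions when averaging the negative log-likelihood; the pad id and the shift-by-one convention are assumptions.

```python
import torch.nn.functional as F

def masked_cross_entropy(logits, targets, pad_id=0):
    """Teacher-forced training loss over a padded batch (illustrative sketch).

    logits:  (batch, T, vocab) scores per time-step, produced by feeding the gold
             previous word at each step rather than the model's own sample.
    targets: (batch, T) gold next-word ids, padded with pad_id.
    """
    return F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        targets.reshape(-1),
        ignore_index=pad_id,      # mask out padding positions
        reduction="mean",
    )

# Usage sketch: decoder_inputs = gold_review[:, :-1], targets = gold_review[:, 1:]
```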
GEM-SciDuet-train-78#paper-1203#slide-5
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 ], "paper_content_text": [ "Introduction Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017) .", "Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects.", "A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural lan-guage queries (i.e., to find items about which a user would be most likely to write a particular phrase).", "Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017) .", "Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications.", "For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review.", "Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017) , extraction of sentiment lexicons (Zhang et al., 2014) , and aspectaware sentiment analysis (Wang et al., 2016; McAuley et al., 2012) .", "These works are related to review generation since they can provide prior knowledge to supervise the generative process.", "We are interested in exploring how such knowledge (e.g.", "extracted aspects) can be used in the review generation task.", "In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge.", "Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases.", "These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g.", "brand words, technical specifications, etc.).", "We propose an encoderdecoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder.", "The sequence encoder uses a gated recurrent unit 0 0 0 … 1 0 0 1 0 … 0 0 (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships.", "The aspect-aware representation is helpful to discover what each user is likely to discuss about each 
item.", "Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer.", "The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects.", "Related Work Review generation belongs to a large body of work on data-to-text natural language generation (Gatt and Krahmer, 2017) , which has applications including summarization (See et al., 2017) , image captioning (Vinyals et al., 2015) , and dialogue response generation (Xing et al., 2017; Ghosh et al., 2017) , among others.", "Among these, review generation is characterized by the need to generate long sequences and estimate high-order interactions between users and items.", "Several approaches have been recently proposed to tackle these problems.", "Dong et al.", "(2017) proposed an attribute-to-sequence (Attr2Seq) method to encode user and item identities as well as rating information with a multi-layer perceptron and a decoder then generates reviews conditioned on this information.", "They also used an attention mechanism to strengthen the alignment between output and input attributes.", "Ni et al.", "(2017) trained a collaborative-filtering generative concatenative network to jointly learn the tasks of review generation and item recommendation.", "Zang and Wan (2017) proposed a hierarchical structure to generate long reviews; they assume each sentence is associated with an aspect score, and learn the attention between aspect scores and sentences during training.", "Our approach differs from these mainly in our goal of incorporating auxiliary textual information (short phrases, product specifications, etc.)", "into the generative process, which facilitates the generation of higher-fidelity reviews.", "Another line of work related to review generation is aspect extraction and opinion mining (Park et al., 2015; Qiu et al., 2017; He et al., 2017; Chen et al., 2014) .", "In this paper, we argue that the extra aspect (opinion) information extracted using these previous works can effectively improve the quality of generated reviews.", "We propose a simple but effective way to combine aspect information into the generative model.", "Approach We describe the review generation task as follows.", "Given a user u, item i, several short phrases {d 1 , d 2 , ..., d M }, and a group of extracted aspects {A 1 , A 2 , ..., A k }, our goal is to generate a review (w 1 , w 2 , ..., w T ) that maximizes the probability P (w 1:T |u, i, d 1:M ).", "To solve this task, we propose a method called ExpansionNet which contains two parts: 1) three encoders to leverage the input phrases and aspect information; and 2) a decoder with an attention fusion layer to generate sequences and align the generation with the input sources.", "The model structure is shown in Figure 1 .", "Sequence encoder, attribute encoder and aspect encoder Our sequence encoder is a two-layer bi-directional GRU, as is commonly used in sequence-tosequence (Seq2Seq) models .", "Input phrases first pass a word embedding layer, then go through the GRU one-by-one and finally yield a sequence of hidden states {e 1 , e 2 ..., e L }.", "In the case of multiple phrases, these share the same sequence encoder and have different lengths L. 
To simplify notation, we only consider one input phrase in this section.", "The attribute encoder and aspect encoder both consist of two embedding layers and a projection layer.", "For the attribute encoder, we define two general embedding layers E u ∈ R |U |×m and E i ∈ R |I|×m to obtain the attribute latent factors γ u and γ i ; for the aspect encoder, we use two aspect-aware embedding layers E u ∈ R |U |×k and E i ∈ R |I|×k to obtain aspect-aware latent factors β u and β i .", "Here |U|, |I|, m and k are the number of users, number of items, the dimension of attributes, and the number of aspects, respectively.", "After the embedding layers, the attribute and aspect-aware latent factors are concatenated and fed into a projection layer with tanh activation.", "The outputs are calculated as: γ u = E u (u), γ i = E i (i) (1) β u = E u (u), β i = E i (i) (2) u = tanh(W u [γ u ; γ i ] + b u ) (3) v = tanh(W v [β u ; β i ] + b v ) (4) where W u ∈ R n×2m , b u ∈ R n , W v ∈ R n×2k , b v ∈ R n are learnable parameters and n is the dimensionality of the hidden units in the decoder.", "Decoder with attention fusion layer The decoder is a two-layer GRU that predicts the target words given the start token.", "The hidden state of the decoder is initialized using the sum of the three encoders' outputs.", "The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word.", "Specifically: h 0 = e L + u + v (5) h t = GRU(w t , h t−1 ), (6) where h 0 ∈ R n is the decoder's initial hidden state and h t ∈ R n is the hidden state at time-step t. To fully exploit the encoder-side information, we apply an attention fusion layer to summarize the output of each encoder and jointly determine the final word distribution.", "For the sequence encoder, the attention vector is defined as in many other applications Luong et al., 2015) : a 1 t = L j=1 α 1 tj e j (7) α 1 tj = exp(tanh(v 1 α (W 1 α [e j ; h t ] + b 1 α )))/Z, (8) where a 1 t ∈ R n is the attention vector on the sequence encoder at time-step t, α 1 tj is the attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.", "For the attribute encoder, the attention vector is calculated as: a 2 t = j∈u,i α 2 tj γ j (9) α 2 tj = exp(tanh(v 2 α (W 2 α [γ j ; h t ] + b 2 α )))/Z, (10) where a 2 t ∈ R n is the attention vector on the attribute encoder, and α 2 tj is the attention score between the attribute latent factor γ j and decoder hidden state h t .", "Inspired by the copy mechanism (Gu et al., 2016; See et al., 2017) , we design an attention vector that estimates the probability that each aspect will be discussed in the next time-step: s ui = W s [β u ; β i ] + b s (11) a 3 t = tanh(W 3 α [s ui ; e t ; h t ] + b 3 α ), (12) where s ui ∈ R k is the aspect importance considering the interaction between u and i, e t is the decoder input after embedding layer at time-step t, and a 3 t ∈ R k is a probability vector to bias each aspect at time-step t. 
Finally, the first two attention vectors are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P v .", "The attention scores from the aspect encoder are then directly added to the aspect words in the final word distribution.", "The output probability for word w at time-step t is given by: where w t is the target word at time-step t, a 3 t [k] is the probability that aspect k will be discussed at time-step t, A k represents all words belonging to aspect k and 1 wt∈A k is a binary variable indicating whether w t belongs to aspect k. During inference, we use greedy decoding by choosing the word with maximum probability, denoted as y t = argmax wt softmax(P (w t )).", "Decoding finishes when an end token is encountered.", "Experiments We consider a real world dataset from Amazon Electronics (McAuley et al., 2015) to evaluate our model.", "We convert all text into lowercase, add start and end tokens to each review, and perform tokenization using NLTK.", "1 We discard reviews with length greater than 100 tokens and consider a vocabulary of 30,000 tokens.", "After preprocessing, the dataset contains 182,850 users, 59,043 items, and 992,172 reviews (sparsity 99.993%), which is much sparser than the datasets used in previous works (Dong et al., 2017; Ni et al., 2017) .", "On average, each review contains 49.32 tokens as well as a short-text summary of 4.52 tokens.", "In our experiments, the basic ExpansionNet uses these summaries as input phrases.", "We split the dataset into training (80%), validation (10%) and test sets (10%).", "All results are reported on the test set.", "Aspect Extraction We use the method 2 in (He et al., 2017) to extract 15 aspects and consider the top 100 words from each aspect.", "Table 2 shows 10 inferred aspects and representative words (inferred aspects are manually labeled).", "ExpansionNet calculates an attention score based on the user and item aspect-aware representation, then determines how much these representative words are biased in the output word distribution.", "1 https://www.nltk.org/ 2 https://github.com/ruidan/ Unsupervised-Aspect-Extraction Experiment Details We use PyTorch 3 to implement our model.", "4 Parameter settings are shown in Table 1 .", "For the attribute encoder and aspect encoder, we set the dimensionality to 64 and 15 respectively.", "For both the sequence encoder and decoder, we use a 2layer GRU with hidden size 512.", "We also add dropout layers before and after the GRUs.", "The dropout rate is set to 0.1.", "During training, the input sequences of the same source (e.g.", "review, summary) inside each batch are padded to the same length.", "Performance Evaluation We evaluate the model on six automatic metrics (Table 3) : Perplexity, BLEU-1/BLEU-4, ROUGE-L and Distinct-1/2 (percentage of distinct unigrams and bi-grams) .", "We compare User/Item user A3G831BTCLWGVQ and item B007M50PTM Review summary \"easy to use and nice standard apps\" Item title \"samsung galaxy tab 2 (10.1-Inch, wi-fi) 2012 model\" Real review \"the display is beautiful and the tablet is very easy to use.", "it comes with some really nice standard apps.\"", "AttrsSeq \"i bought this for my wife 's new ipad air .", "it fits perfectly and looks great .", "the only thing i do n't like is that the cover is a little too small for the ipad air . 
\"", "ExpansionNet \"i love this tablet .", "it is fast and easy to use .", "i have no complaints .", "i would recommend this tablet to anyone .\"", "+title \"i love this tablet .", "it is fast and easy to use .", "i have a galaxy tab 2 and i love it .\"", "+attribute & aspect \"i love this tablet .", "it is easy to use and the screen is very responsive .", "i love the fact that it has a micro sd slot .", "i have not tried the tablet app yet but i do n't have any problems with it .", "i am very happy with this tablet .\"", "Figure 2 : Examples of a real review and reviews generated by different models given a user, item, review summary, and item title.", "Highlights added for emphasis.", "against three baselines: Rand (randomly choose a review from the training set), GRU-LM (the GRU decoder works alone as a language model) and a state-of-the-art model Attr2Seq that only considers user and item attribute (Dong et al., 2017) .", "ExpansionNet (with summary, item title, attribute and aspect as input) achieves significant improvements over Attr2Seq on all metrics.", "As we add more input information, the model continues to obtain better results, except for the ROUGE-L metric.", "This proves that our model can effectively learn from short input phrases and aspect information and improve the correctness and diversity of generated results.", "Figure 2 presents a sample generation result.", "ExpansionNet captures fine-grained item information (e.g.", "that the item is a tablet), which Attr2Seq fails to recognize.", "Moreover, given a phrase like \"easy to use\" in the summary, ExpansionNet generates reviews containing the same text.", "This demonstrates the possibility of using our model in an assistive review generation scenario.", "Finally, given extra aspect information, the model successfully estimates that the screen would be an important aspect (i.e., for the current user and item); it generates phrases such as \"screen is very respon- sive\" about the aspect \"screen\" which is also covered in the real (ground-truth) review (\"display is beautiful\").", "We are also interested in seeing how the aspectaware representation can find related aspects and bias the generation to discuss more about those aspects.", "We analyze the average number of aspects in real and generated reviews and show on average how many aspects in real reviews are covered in generated reviews.", "We consider a review as covering an aspect if any of the aspect's representative words exists in the review.", "As shown in Table 4 , Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews.", "On the other hand, ExpansionNet better captures the distribution of aspects that are discussed in real reviews." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3" ], "paper_header_content": [ "Introduction", "Related Work", "Approach", "Sequence encoder, attribute encoder and aspect encoder", "Decoder with attention fusion layer", "Experiments", "Aspect Extraction", "Experiment Details", "Performance Evaluation" ] }
GEM-SciDuet-train-78#paper-1203#slide-5
Automatic evaluation metrics
Table 2 Comparison of different algorithms
Table 2 Comparison of different algorithms
[]
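The slide in this record reports automatic evaluation metrics, among them Distinct-1/2, the percentage of distinct unigrams and bigrams in the generated reviews. A minimal sketch of that metric follows, assuming whitespace-tokenised generations and corpus-level pooling of n-grams; both choices are my assumptions, since the record does not state how the numbers were aggregated.

def distinct_n(generations, n):
    # Distinct-n: number of unique n-grams divided by the total number of n-grams
    # over all generated reviews. generations: list of token lists.
    total, unique = 0, set()
    for tokens in generations:
        ngrams = [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
        total += len(ngrams)
        unique.update(ngrams)
    return len(unique) / total if total else 0.0

# Distinct-1 and Distinct-2 as reported in the evaluation:
# d1 = distinct_n(generated_reviews, 1)
# d2 = distinct_n(generated_reviews, 2)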
GEM-SciDuet-train-78#paper-1203#slide-6
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
In this paper, we focus on the problem of building assistive systems that can help users to write reviews. We cast this problem using an encoder-decoder framework that generates personalized reviews by expanding short phrases (e.g. review summaries, product titles) provided as input to the system. We incorporate aspect-level information via an aspect encoder that learns 'aspect-aware' user and item representations. An attention fusion layer is applied to control generation by attending on the outputs of multiple encoders. Experimental results show that our model is capable of generating coherent and diverse reviews that expand the contents of input phrases. In addition, the learned aspect-aware representations discover those aspects that users are more inclined to discuss and bias the generated text toward their personalized aspect preferences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 ], "paper_content_text": [ "Introduction Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017) .", "Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects.", "A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural lan-guage queries (i.e., to find items about which a user would be most likely to write a particular phrase).", "Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017) .", "Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications.", "For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review.", "Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017) , extraction of sentiment lexicons (Zhang et al., 2014) , and aspectaware sentiment analysis (Wang et al., 2016; McAuley et al., 2012) .", "These works are related to review generation since they can provide prior knowledge to supervise the generative process.", "We are interested in exploring how such knowledge (e.g.", "extracted aspects) can be used in the review generation task.", "In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge.", "Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases.", "These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g.", "brand words, technical specifications, etc.).", "We propose an encoderdecoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder.", "The sequence encoder uses a gated recurrent unit 0 0 0 … 1 0 0 1 0 … 0 0 (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships.", "The aspect-aware representation is helpful to discover what each user is likely to discuss about each 
item.", "Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer.", "The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects.", "Related Work Review generation belongs to a large body of work on data-to-text natural language generation (Gatt and Krahmer, 2017) , which has applications including summarization (See et al., 2017) , image captioning (Vinyals et al., 2015) , and dialogue response generation (Xing et al., 2017; Ghosh et al., 2017) , among others.", "Among these, review generation is characterized by the need to generate long sequences and estimate high-order interactions between users and items.", "Several approaches have been recently proposed to tackle these problems.", "Dong et al.", "(2017) proposed an attribute-to-sequence (Attr2Seq) method to encode user and item identities as well as rating information with a multi-layer perceptron and a decoder then generates reviews conditioned on this information.", "They also used an attention mechanism to strengthen the alignment between output and input attributes.", "Ni et al.", "(2017) trained a collaborative-filtering generative concatenative network to jointly learn the tasks of review generation and item recommendation.", "Zang and Wan (2017) proposed a hierarchical structure to generate long reviews; they assume each sentence is associated with an aspect score, and learn the attention between aspect scores and sentences during training.", "Our approach differs from these mainly in our goal of incorporating auxiliary textual information (short phrases, product specifications, etc.)", "into the generative process, which facilitates the generation of higher-fidelity reviews.", "Another line of work related to review generation is aspect extraction and opinion mining (Park et al., 2015; Qiu et al., 2017; He et al., 2017; Chen et al., 2014) .", "In this paper, we argue that the extra aspect (opinion) information extracted using these previous works can effectively improve the quality of generated reviews.", "We propose a simple but effective way to combine aspect information into the generative model.", "Approach We describe the review generation task as follows.", "Given a user u, item i, several short phrases {d 1 , d 2 , ..., d M }, and a group of extracted aspects {A 1 , A 2 , ..., A k }, our goal is to generate a review (w 1 , w 2 , ..., w T ) that maximizes the probability P (w 1:T |u, i, d 1:M ).", "To solve this task, we propose a method called ExpansionNet which contains two parts: 1) three encoders to leverage the input phrases and aspect information; and 2) a decoder with an attention fusion layer to generate sequences and align the generation with the input sources.", "The model structure is shown in Figure 1 .", "Sequence encoder, attribute encoder and aspect encoder Our sequence encoder is a two-layer bi-directional GRU, as is commonly used in sequence-tosequence (Seq2Seq) models .", "Input phrases first pass a word embedding layer, then go through the GRU one-by-one and finally yield a sequence of hidden states {e 1 , e 2 ..., e L }.", "In the case of multiple phrases, these share the same sequence encoder and have different lengths L. 
To simplify notation, we only consider one input phrase in this section.", "The attribute encoder and aspect encoder both consist of two embedding layers and a projection layer.", "For the attribute encoder, we define two general embedding layers E u ∈ R |U |×m and E i ∈ R |I|×m to obtain the attribute latent factors γ u and γ i ; for the aspect encoder, we use two aspect-aware embedding layers E u ∈ R |U |×k and E i ∈ R |I|×k to obtain aspect-aware latent factors β u and β i .", "Here |U|, |I|, m and k are the number of users, number of items, the dimension of attributes, and the number of aspects, respectively.", "After the embedding layers, the attribute and aspect-aware latent factors are concatenated and fed into a projection layer with tanh activation.", "The outputs are calculated as: γ u = E u (u), γ i = E i (i) (1) β u = E u (u), β i = E i (i) (2) u = tanh(W u [γ u ; γ i ] + b u ) (3) v = tanh(W v [β u ; β i ] + b v ) (4) where W u ∈ R n×2m , b u ∈ R n , W v ∈ R n×2k , b v ∈ R n are learnable parameters and n is the dimensionality of the hidden units in the decoder.", "Decoder with attention fusion layer The decoder is a two-layer GRU that predicts the target words given the start token.", "The hidden state of the decoder is initialized using the sum of the three encoders' outputs.", "The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word.", "Specifically: h 0 = e L + u + v (5) h t = GRU(w t , h t−1 ), (6) where h 0 ∈ R n is the decoder's initial hidden state and h t ∈ R n is the hidden state at time-step t. To fully exploit the encoder-side information, we apply an attention fusion layer to summarize the output of each encoder and jointly determine the final word distribution.", "For the sequence encoder, the attention vector is defined as in many other applications Luong et al., 2015) : a 1 t = L j=1 α 1 tj e j (7) α 1 tj = exp(tanh(v 1 α (W 1 α [e j ; h t ] + b 1 α )))/Z, (8) where a 1 t ∈ R n is the attention vector on the sequence encoder at time-step t, α 1 tj is the attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.", "For the attribute encoder, the attention vector is calculated as: a 2 t = j∈u,i α 2 tj γ j (9) α 2 tj = exp(tanh(v 2 α (W 2 α [γ j ; h t ] + b 2 α )))/Z, (10) where a 2 t ∈ R n is the attention vector on the attribute encoder, and α 2 tj is the attention score between the attribute latent factor γ j and decoder hidden state h t .", "Inspired by the copy mechanism (Gu et al., 2016; See et al., 2017) , we design an attention vector that estimates the probability that each aspect will be discussed in the next time-step: s ui = W s [β u ; β i ] + b s (11) a 3 t = tanh(W 3 α [s ui ; e t ; h t ] + b 3 α ), (12) where s ui ∈ R k is the aspect importance considering the interaction between u and i, e t is the decoder input after embedding layer at time-step t, and a 3 t ∈ R k is a probability vector to bias each aspect at time-step t. 
Finally, the first two attention vectors are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P v .", "The attention scores from the aspect encoder are then directly added to the aspect words in the final word distribution.", "The output probability for word w at time-step t is given by: where w t is the target word at time-step t, a 3 t [k] is the probability that aspect k will be discussed at time-step t, A k represents all words belonging to aspect k and 1 wt∈A k is a binary variable indicating whether w t belongs to aspect k. During inference, we use greedy decoding by choosing the word with maximum probability, denoted as y t = argmax wt softmax(P (w t )).", "Decoding finishes when an end token is encountered.", "Experiments We consider a real world dataset from Amazon Electronics (McAuley et al., 2015) to evaluate our model.", "We convert all text into lowercase, add start and end tokens to each review, and perform tokenization using NLTK.", "1 We discard reviews with length greater than 100 tokens and consider a vocabulary of 30,000 tokens.", "After preprocessing, the dataset contains 182,850 users, 59,043 items, and 992,172 reviews (sparsity 99.993%), which is much sparser than the datasets used in previous works (Dong et al., 2017; Ni et al., 2017) .", "On average, each review contains 49.32 tokens as well as a short-text summary of 4.52 tokens.", "In our experiments, the basic ExpansionNet uses these summaries as input phrases.", "We split the dataset into training (80%), validation (10%) and test sets (10%).", "All results are reported on the test set.", "Aspect Extraction We use the method 2 in (He et al., 2017) to extract 15 aspects and consider the top 100 words from each aspect.", "Table 2 shows 10 inferred aspects and representative words (inferred aspects are manually labeled).", "ExpansionNet calculates an attention score based on the user and item aspect-aware representation, then determines how much these representative words are biased in the output word distribution.", "1 https://www.nltk.org/ 2 https://github.com/ruidan/ Unsupervised-Aspect-Extraction Experiment Details We use PyTorch 3 to implement our model.", "4 Parameter settings are shown in Table 1 .", "For the attribute encoder and aspect encoder, we set the dimensionality to 64 and 15 respectively.", "For both the sequence encoder and decoder, we use a 2layer GRU with hidden size 512.", "We also add dropout layers before and after the GRUs.", "The dropout rate is set to 0.1.", "During training, the input sequences of the same source (e.g.", "review, summary) inside each batch are padded to the same length.", "Performance Evaluation We evaluate the model on six automatic metrics (Table 3) : Perplexity, BLEU-1/BLEU-4, ROUGE-L and Distinct-1/2 (percentage of distinct unigrams and bi-grams) .", "We compare User/Item user A3G831BTCLWGVQ and item B007M50PTM Review summary \"easy to use and nice standard apps\" Item title \"samsung galaxy tab 2 (10.1-Inch, wi-fi) 2012 model\" Real review \"the display is beautiful and the tablet is very easy to use.", "it comes with some really nice standard apps.\"", "AttrsSeq \"i bought this for my wife 's new ipad air .", "it fits perfectly and looks great .", "the only thing i do n't like is that the cover is a little too small for the ipad air . 
\"", "ExpansionNet \"i love this tablet .", "it is fast and easy to use .", "i have no complaints .", "i would recommend this tablet to anyone .\"", "+title \"i love this tablet .", "it is fast and easy to use .", "i have a galaxy tab 2 and i love it .\"", "+attribute & aspect \"i love this tablet .", "it is easy to use and the screen is very responsive .", "i love the fact that it has a micro sd slot .", "i have not tried the tablet app yet but i do n't have any problems with it .", "i am very happy with this tablet .\"", "Figure 2 : Examples of a real review and reviews generated by different models given a user, item, review summary, and item title.", "Highlights added for emphasis.", "against three baselines: Rand (randomly choose a review from the training set), GRU-LM (the GRU decoder works alone as a language model) and a state-of-the-art model Attr2Seq that only considers user and item attribute (Dong et al., 2017) .", "ExpansionNet (with summary, item title, attribute and aspect as input) achieves significant improvements over Attr2Seq on all metrics.", "As we add more input information, the model continues to obtain better results, except for the ROUGE-L metric.", "This proves that our model can effectively learn from short input phrases and aspect information and improve the correctness and diversity of generated results.", "Figure 2 presents a sample generation result.", "ExpansionNet captures fine-grained item information (e.g.", "that the item is a tablet), which Attr2Seq fails to recognize.", "Moreover, given a phrase like \"easy to use\" in the summary, ExpansionNet generates reviews containing the same text.", "This demonstrates the possibility of using our model in an assistive review generation scenario.", "Finally, given extra aspect information, the model successfully estimates that the screen would be an important aspect (i.e., for the current user and item); it generates phrases such as \"screen is very respon- sive\" about the aspect \"screen\" which is also covered in the real (ground-truth) review (\"display is beautiful\").", "We are also interested in seeing how the aspectaware representation can find related aspects and bias the generation to discuss more about those aspects.", "We analyze the average number of aspects in real and generated reviews and show on average how many aspects in real reviews are covered in generated reviews.", "We consider a review as covering an aspect if any of the aspect's representative words exists in the review.", "As shown in Table 4 , Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews.", "On the other hand, ExpansionNet better captures the distribution of aspects that are discussed in real reviews." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3" ], "paper_header_content": [ "Introduction", "Related Work", "Approach", "Sequence encoder, attribute encoder and aspect encoder", "Decoder with attention fusion layer", "Experiments", "Aspect Extraction", "Experiment Details", "Performance Evaluation" ] }
GEM-SciDuet-train-78#paper-1203#slide-6
Examples of generated reviews
Summary easy to use and nice standard apps Real review the display is beautiful and the tablet is very easy to use. it comes with some really nice standard apps . Attr2Seq i bought this for my wife 's new ipad air . it fits perfectly and looks great . the only thing i do n't like is that the cover is a little too small for the ipad air . ExpansionNet i love this tablet . it is fast and easy to use . i have no complaints . i would recommend this tablet to anyone . +title i love this tablet . it is fast and easy to use . i have a galaxy tab 2 and i love it . +attribute & aspect i love this tablet . it is easy to use and the screen is very responsive . i love the fact that it has a micro sd slot . i have not tried the tablet app yet but i do n't have any problems with it . i am very happy with this tablet .
Summary easy to use and nice standard apps Real review the display is beautiful and the tablet is very easy to use. it comes with some really nice standard apps . Attr2Seq i bought this for my wife 's new ipad air . it fits perfectly and looks great . the only thing i do n't like is that the cover is a little too small for the ipad air . ExpansionNet i love this tablet . it is fast and easy to use . i have no complaints . i would recommend this tablet to anyone . +title i love this tablet . it is fast and easy to use . i have a galaxy tab 2 and i love it . +attribute & aspect i love this tablet . it is easy to use and the screen is very responsive . i love the fact that it has a micro sd slot . i have not tried the tablet app yet but i do n't have any problems with it . i am very happy with this tablet .
[]
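The generated examples shown in this record come from greedy decoding: at each step the decoder picks the highest-probability word and stops when the end token is produced, with reviews capped at 100 tokens. The loop below sketches that procedure; decoder_step is a hypothetical wrapper around the GRU update, attention fusion and aspect bias described in the record, and the batched stopping criterion is an assumption.

import torch

@torch.no_grad()
def greedy_decode(decoder_step, h0, bos_id, eos_id, max_len=100):
    # decoder_step(prev_word_ids, hidden) -> (vocab logits, new hidden) is assumed
    # to implement one step of the decoder with its attention fusion layer.
    hidden = h0
    prev = torch.full((h0.size(0),), bos_id, dtype=torch.long)
    outputs = []
    for _ in range(max_len):
        logits, hidden = decoder_step(prev, hidden)
        prev = logits.argmax(dim=-1)       # y_t = argmax_w P(w_t)
        outputs.append(prev)
        if (prev == eos_id).all():         # stop once every sequence has emitted <eos>
            break
    return torch.stack(outputs, dim=1)     # (batch, generated_length)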
GEM-SciDuet-train-78#paper-1203#slide-7
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
In this paper, we focus on the problem of building assistive systems that can help users to write reviews. We cast this problem using an encoder-decoder framework that generates personalized reviews by expanding short phrases (e.g. review summaries, product titles) provided as input to the system. We incorporate aspect-level information via an aspect encoder that learns 'aspect-aware' user and item representations. An attention fusion layer is applied to control generation by attending on the outputs of multiple encoders. Experimental results show that our model is capable of generating coherent and diverse reviews that expand the contents of input phrases. In addition, the learned aspect-aware representations discover those aspects that users are more inclined to discuss and bias the generated text toward their personalized aspect preferences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 ], "paper_content_text": [ "Introduction Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017) .", "Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects.", "A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural lan-guage queries (i.e., to find items about which a user would be most likely to write a particular phrase).", "Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017) .", "Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications.", "For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review.", "Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017) , extraction of sentiment lexicons (Zhang et al., 2014) , and aspectaware sentiment analysis (Wang et al., 2016; McAuley et al., 2012) .", "These works are related to review generation since they can provide prior knowledge to supervise the generative process.", "We are interested in exploring how such knowledge (e.g.", "extracted aspects) can be used in the review generation task.", "In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge.", "Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases.", "These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g.", "brand words, technical specifications, etc.).", "We propose an encoderdecoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder.", "The sequence encoder uses a gated recurrent unit 0 0 0 … 1 0 0 1 0 … 0 0 (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships.", "The aspect-aware representation is helpful to discover what each user is likely to discuss about each 
item.", "Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer.", "The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects.", "Related Work Review generation belongs to a large body of work on data-to-text natural language generation (Gatt and Krahmer, 2017) , which has applications including summarization (See et al., 2017) , image captioning (Vinyals et al., 2015) , and dialogue response generation (Xing et al., 2017; Ghosh et al., 2017) , among others.", "Among these, review generation is characterized by the need to generate long sequences and estimate high-order interactions between users and items.", "Several approaches have been recently proposed to tackle these problems.", "Dong et al.", "(2017) proposed an attribute-to-sequence (Attr2Seq) method to encode user and item identities as well as rating information with a multi-layer perceptron and a decoder then generates reviews conditioned on this information.", "They also used an attention mechanism to strengthen the alignment between output and input attributes.", "Ni et al.", "(2017) trained a collaborative-filtering generative concatenative network to jointly learn the tasks of review generation and item recommendation.", "Zang and Wan (2017) proposed a hierarchical structure to generate long reviews; they assume each sentence is associated with an aspect score, and learn the attention between aspect scores and sentences during training.", "Our approach differs from these mainly in our goal of incorporating auxiliary textual information (short phrases, product specifications, etc.)", "into the generative process, which facilitates the generation of higher-fidelity reviews.", "Another line of work related to review generation is aspect extraction and opinion mining (Park et al., 2015; Qiu et al., 2017; He et al., 2017; Chen et al., 2014) .", "In this paper, we argue that the extra aspect (opinion) information extracted using these previous works can effectively improve the quality of generated reviews.", "We propose a simple but effective way to combine aspect information into the generative model.", "Approach We describe the review generation task as follows.", "Given a user u, item i, several short phrases {d 1 , d 2 , ..., d M }, and a group of extracted aspects {A 1 , A 2 , ..., A k }, our goal is to generate a review (w 1 , w 2 , ..., w T ) that maximizes the probability P (w 1:T |u, i, d 1:M ).", "To solve this task, we propose a method called ExpansionNet which contains two parts: 1) three encoders to leverage the input phrases and aspect information; and 2) a decoder with an attention fusion layer to generate sequences and align the generation with the input sources.", "The model structure is shown in Figure 1 .", "Sequence encoder, attribute encoder and aspect encoder Our sequence encoder is a two-layer bi-directional GRU, as is commonly used in sequence-tosequence (Seq2Seq) models .", "Input phrases first pass a word embedding layer, then go through the GRU one-by-one and finally yield a sequence of hidden states {e 1 , e 2 ..., e L }.", "In the case of multiple phrases, these share the same sequence encoder and have different lengths L. 
To simplify notation, we only consider one input phrase in this section.", "The attribute encoder and aspect encoder both consist of two embedding layers and a projection layer.", "For the attribute encoder, we define two general embedding layers E u ∈ R |U |×m and E i ∈ R |I|×m to obtain the attribute latent factors γ u and γ i ; for the aspect encoder, we use two aspect-aware embedding layers E u ∈ R |U |×k and E i ∈ R |I|×k to obtain aspect-aware latent factors β u and β i .", "Here |U|, |I|, m and k are the number of users, number of items, the dimension of attributes, and the number of aspects, respectively.", "After the embedding layers, the attribute and aspect-aware latent factors are concatenated and fed into a projection layer with tanh activation.", "The outputs are calculated as: γ u = E u (u), γ i = E i (i) (1) β u = E u (u), β i = E i (i) (2) u = tanh(W u [γ u ; γ i ] + b u ) (3) v = tanh(W v [β u ; β i ] + b v ) (4) where W u ∈ R n×2m , b u ∈ R n , W v ∈ R n×2k , b v ∈ R n are learnable parameters and n is the dimensionality of the hidden units in the decoder.", "Decoder with attention fusion layer The decoder is a two-layer GRU that predicts the target words given the start token.", "The hidden state of the decoder is initialized using the sum of the three encoders' outputs.", "The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word.", "Specifically: h 0 = e L + u + v (5) h t = GRU(w t , h t−1 ), (6) where h 0 ∈ R n is the decoder's initial hidden state and h t ∈ R n is the hidden state at time-step t. To fully exploit the encoder-side information, we apply an attention fusion layer to summarize the output of each encoder and jointly determine the final word distribution.", "For the sequence encoder, the attention vector is defined as in many other applications Luong et al., 2015) : a 1 t = L j=1 α 1 tj e j (7) α 1 tj = exp(tanh(v 1 α (W 1 α [e j ; h t ] + b 1 α )))/Z, (8) where a 1 t ∈ R n is the attention vector on the sequence encoder at time-step t, α 1 tj is the attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.", "For the attribute encoder, the attention vector is calculated as: a 2 t = j∈u,i α 2 tj γ j (9) α 2 tj = exp(tanh(v 2 α (W 2 α [γ j ; h t ] + b 2 α )))/Z, (10) where a 2 t ∈ R n is the attention vector on the attribute encoder, and α 2 tj is the attention score between the attribute latent factor γ j and decoder hidden state h t .", "Inspired by the copy mechanism (Gu et al., 2016; See et al., 2017) , we design an attention vector that estimates the probability that each aspect will be discussed in the next time-step: s ui = W s [β u ; β i ] + b s (11) a 3 t = tanh(W 3 α [s ui ; e t ; h t ] + b 3 α ), (12) where s ui ∈ R k is the aspect importance considering the interaction between u and i, e t is the decoder input after embedding layer at time-step t, and a 3 t ∈ R k is a probability vector to bias each aspect at time-step t. 
Finally, the first two attention vectors are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P v .", "The attention scores from the aspect encoder are then directly added to the aspect words in the final word distribution.", "The output probability for word w at time-step t is given by: where w t is the target word at time-step t, a 3 t [k] is the probability that aspect k will be discussed at time-step t, A k represents all words belonging to aspect k and 1 wt∈A k is a binary variable indicating whether w t belongs to aspect k. During inference, we use greedy decoding by choosing the word with maximum probability, denoted as y t = argmax wt softmax(P (w t )).", "Decoding finishes when an end token is encountered.", "Experiments We consider a real world dataset from Amazon Electronics (McAuley et al., 2015) to evaluate our model.", "We convert all text into lowercase, add start and end tokens to each review, and perform tokenization using NLTK.", "1 We discard reviews with length greater than 100 tokens and consider a vocabulary of 30,000 tokens.", "After preprocessing, the dataset contains 182,850 users, 59,043 items, and 992,172 reviews (sparsity 99.993%), which is much sparser than the datasets used in previous works (Dong et al., 2017; Ni et al., 2017) .", "On average, each review contains 49.32 tokens as well as a short-text summary of 4.52 tokens.", "In our experiments, the basic ExpansionNet uses these summaries as input phrases.", "We split the dataset into training (80%), validation (10%) and test sets (10%).", "All results are reported on the test set.", "Aspect Extraction We use the method 2 in (He et al., 2017) to extract 15 aspects and consider the top 100 words from each aspect.", "Table 2 shows 10 inferred aspects and representative words (inferred aspects are manually labeled).", "ExpansionNet calculates an attention score based on the user and item aspect-aware representation, then determines how much these representative words are biased in the output word distribution.", "1 https://www.nltk.org/ 2 https://github.com/ruidan/ Unsupervised-Aspect-Extraction Experiment Details We use PyTorch 3 to implement our model.", "4 Parameter settings are shown in Table 1 .", "For the attribute encoder and aspect encoder, we set the dimensionality to 64 and 15 respectively.", "For both the sequence encoder and decoder, we use a 2layer GRU with hidden size 512.", "We also add dropout layers before and after the GRUs.", "The dropout rate is set to 0.1.", "During training, the input sequences of the same source (e.g.", "review, summary) inside each batch are padded to the same length.", "Performance Evaluation We evaluate the model on six automatic metrics (Table 3) : Perplexity, BLEU-1/BLEU-4, ROUGE-L and Distinct-1/2 (percentage of distinct unigrams and bi-grams) .", "We compare User/Item user A3G831BTCLWGVQ and item B007M50PTM Review summary \"easy to use and nice standard apps\" Item title \"samsung galaxy tab 2 (10.1-Inch, wi-fi) 2012 model\" Real review \"the display is beautiful and the tablet is very easy to use.", "it comes with some really nice standard apps.\"", "AttrsSeq \"i bought this for my wife 's new ipad air .", "it fits perfectly and looks great .", "the only thing i do n't like is that the cover is a little too small for the ipad air . 
\"", "ExpansionNet \"i love this tablet .", "it is fast and easy to use .", "i have no complaints .", "i would recommend this tablet to anyone .\"", "+title \"i love this tablet .", "it is fast and easy to use .", "i have a galaxy tab 2 and i love it .\"", "+attribute & aspect \"i love this tablet .", "it is easy to use and the screen is very responsive .", "i love the fact that it has a micro sd slot .", "i have not tried the tablet app yet but i do n't have any problems with it .", "i am very happy with this tablet .\"", "Figure 2 : Examples of a real review and reviews generated by different models given a user, item, review summary, and item title.", "Highlights added for emphasis.", "against three baselines: Rand (randomly choose a review from the training set), GRU-LM (the GRU decoder works alone as a language model) and a state-of-the-art model Attr2Seq that only considers user and item attribute (Dong et al., 2017) .", "ExpansionNet (with summary, item title, attribute and aspect as input) achieves significant improvements over Attr2Seq on all metrics.", "As we add more input information, the model continues to obtain better results, except for the ROUGE-L metric.", "This proves that our model can effectively learn from short input phrases and aspect information and improve the correctness and diversity of generated results.", "Figure 2 presents a sample generation result.", "ExpansionNet captures fine-grained item information (e.g.", "that the item is a tablet), which Attr2Seq fails to recognize.", "Moreover, given a phrase like \"easy to use\" in the summary, ExpansionNet generates reviews containing the same text.", "This demonstrates the possibility of using our model in an assistive review generation scenario.", "Finally, given extra aspect information, the model successfully estimates that the screen would be an important aspect (i.e., for the current user and item); it generates phrases such as \"screen is very respon- sive\" about the aspect \"screen\" which is also covered in the real (ground-truth) review (\"display is beautiful\").", "We are also interested in seeing how the aspectaware representation can find related aspects and bias the generation to discuss more about those aspects.", "We analyze the average number of aspects in real and generated reviews and show on average how many aspects in real reviews are covered in generated reviews.", "We consider a review as covering an aspect if any of the aspect's representative words exists in the review.", "As shown in Table 4 , Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews.", "On the other hand, ExpansionNet better captures the distribution of aspects that are discussed in real reviews." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3" ], "paper_header_content": [ "Introduction", "Related Work", "Approach", "Sequence encoder, attribute encoder and aspect encoder", "Decoder with attention fusion layer", "Experiments", "Aspect Extraction", "Experiment Details", "Performance Evaluation" ] }
GEM-SciDuet-train-78#paper-1203#slide-7
Broader aspect coverage in generation
# aspects: plus one if the review covers the representative words from that aspect Our model covers more of the real reviews' aspects # aspects in generated review also covered in real review
# aspects: plus one if the review covers the representative words from that aspect Our model covers more of the real reviews' aspects # aspects in generated review also covered in real review
[]
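The aspect-coverage comparison summarised by this slide counts a review as covering an aspect whenever it contains any of that aspect's representative words (the top-100 words per extracted aspect). The helper below sketches that analysis; the data structures are my assumptions about how the word lists would be stored.

def aspects_covered(review_tokens, aspect_words):
    # aspect_words: dict mapping aspect id -> set of representative words.
    tokens = set(review_tokens)
    return {a for a, words in aspect_words.items() if tokens & words}

def coverage_stats(real_reviews, generated_reviews, aspect_words):
    # Average number of aspects per review, and average number of aspects in the
    # generated review that are also covered by the paired real review.
    n = len(real_reviews)
    avg_real = sum(len(aspects_covered(r, aspect_words)) for r in real_reviews) / n
    avg_gen = sum(len(aspects_covered(g, aspect_words)) for g in generated_reviews) / n
    avg_overlap = sum(
        len(aspects_covered(r, aspect_words) & aspects_covered(g, aspect_words))
        for r, g in zip(real_reviews, generated_reviews)
    ) / n
    return avg_real, avg_gen, avg_overlap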
GEM-SciDuet-train-78#paper-1203#slide-8
1203
Personalized Review Generation by Expanding Phrases and Attending on Aspect-Aware Representations
In this paper, we focus on the problem of building assistive systems that can help users to write reviews. We cast this problem using an encoder-decoder framework that generates personalized reviews by expanding short phrases (e.g. review summaries, product titles) provided as input to the system. We incorporate aspect-level information via an aspect encoder that learns 'aspect-aware' user and item representations. An attention fusion layer is applied to control generation by attending on the outputs of multiple encoders. Experimental results show that our model is capable of generating coherent and diverse reviews that expand the contents of input phrases. In addition, the learned aspect-aware representations discover those aspects that users are more inclined to discuss and bias the generated text toward their personalized aspect preferences.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108 ], "paper_content_text": [ "Introduction Contextual, or 'data-to-text' natural language generation is one of the core tasks in natural language processing and has a considerable impact on various fields (Gatt and Krahmer, 2017) .", "Within the field of recommender systems, a promising application is to estimate (or generate) personalized reviews that a user would write about a product, i.e., to discover their nuanced opinions about each of its individual aspects.", "A successful model could work (for instance) as (a) a highly-nuanced recommender system that tells users their likely reaction to a product in the form of text fragments; (b) a writing tool that helps users 'brainstorm' the review-writing process; or (c) a querying system that facilitates personalized natural lan-guage queries (i.e., to find items about which a user would be most likely to write a particular phrase).", "Some recent works have explored the review generation task and shown success in generating cohesive reviews (Dong et al., 2017; Ni et al., 2017; Zang and Wan, 2017) .", "Most of these works treat the user and item identity as input; we seek a system with more nuance and more precision by allowing users to 'guide' the model via short phrases, or auxiliary data such as item specifications.", "For example, a review writing assistant might allow users to write short phrases and expand these key points into a plausible review.", "Review text has been widely studied in traditional tasks such as aspect extraction (Mukherjee and Liu, 2012; He et al., 2017) , extraction of sentiment lexicons (Zhang et al., 2014) , and aspectaware sentiment analysis (Wang et al., 2016; McAuley et al., 2012) .", "These works are related to review generation since they can provide prior knowledge to supervise the generative process.", "We are interested in exploring how such knowledge (e.g.", "extracted aspects) can be used in the review generation task.", "In this paper, we focus on designing a review generation model that is able to leverage both user and item information as well as auxiliary, textual input and aspect-aware knowledge.", "Specifically, we study the task of expanding short phrases into complete, coherent reviews that accurately reflect the opinions and knowledge learned from those phrases.", "These short phrases could include snippets provided by the user, or manifest aspects about the items themselves (e.g.", "brand words, technical specifications, etc.).", "We propose an encoderdecoder framework that takes into consideration three encoders (a sequence encoder, an attribute encoder, and an aspect encoder), and one decoder.", "The sequence encoder uses a gated recurrent unit 0 0 0 … 1 0 0 1 0 … 0 0 (GRU) network to encode text information; the attribute encoder learns a latent representation of user and item identity; finally, the aspect encoder finds an aspect-aware representation of users and items, which reflects user-aspect preferences and item-aspect relationships.", "The aspect-aware representation is helpful to discover what each user is likely to discuss about each 
item.", "Finally, the output of these encoders is passed to the sequence decoder with an attention fusion layer.", "The decoder attends on the encoded information and biases the model to generate words that are consistent with the input phrases and words belonging to the most relevant aspects.", "Related Work Review generation belongs to a large body of work on data-to-text natural language generation (Gatt and Krahmer, 2017) , which has applications including summarization (See et al., 2017) , image captioning (Vinyals et al., 2015) , and dialogue response generation (Xing et al., 2017; Ghosh et al., 2017) , among others.", "Among these, review generation is characterized by the need to generate long sequences and estimate high-order interactions between users and items.", "Several approaches have been recently proposed to tackle these problems.", "Dong et al.", "(2017) proposed an attribute-to-sequence (Attr2Seq) method to encode user and item identities as well as rating information with a multi-layer perceptron and a decoder then generates reviews conditioned on this information.", "They also used an attention mechanism to strengthen the alignment between output and input attributes.", "Ni et al.", "(2017) trained a collaborative-filtering generative concatenative network to jointly learn the tasks of review generation and item recommendation.", "Zang and Wan (2017) proposed a hierarchical structure to generate long reviews; they assume each sentence is associated with an aspect score, and learn the attention between aspect scores and sentences during training.", "Our approach differs from these mainly in our goal of incorporating auxiliary textual information (short phrases, product specifications, etc.)", "into the generative process, which facilitates the generation of higher-fidelity reviews.", "Another line of work related to review generation is aspect extraction and opinion mining (Park et al., 2015; Qiu et al., 2017; He et al., 2017; Chen et al., 2014) .", "In this paper, we argue that the extra aspect (opinion) information extracted using these previous works can effectively improve the quality of generated reviews.", "We propose a simple but effective way to combine aspect information into the generative model.", "Approach We describe the review generation task as follows.", "Given a user u, item i, several short phrases {d 1 , d 2 , ..., d M }, and a group of extracted aspects {A 1 , A 2 , ..., A k }, our goal is to generate a review (w 1 , w 2 , ..., w T ) that maximizes the probability P (w 1:T |u, i, d 1:M ).", "To solve this task, we propose a method called ExpansionNet which contains two parts: 1) three encoders to leverage the input phrases and aspect information; and 2) a decoder with an attention fusion layer to generate sequences and align the generation with the input sources.", "The model structure is shown in Figure 1 .", "Sequence encoder, attribute encoder and aspect encoder Our sequence encoder is a two-layer bi-directional GRU, as is commonly used in sequence-tosequence (Seq2Seq) models .", "Input phrases first pass a word embedding layer, then go through the GRU one-by-one and finally yield a sequence of hidden states {e 1 , e 2 ..., e L }.", "In the case of multiple phrases, these share the same sequence encoder and have different lengths L. 
To simplify notation, we only consider one input phrase in this section.", "The attribute encoder and aspect encoder both consist of two embedding layers and a projection layer.", "For the attribute encoder, we define two general embedding layers E u ∈ R |U |×m and E i ∈ R |I|×m to obtain the attribute latent factors γ u and γ i ; for the aspect encoder, we use two aspect-aware embedding layers E u ∈ R |U |×k and E i ∈ R |I|×k to obtain aspect-aware latent factors β u and β i .", "Here |U|, |I|, m and k are the number of users, number of items, the dimension of attributes, and the number of aspects, respectively.", "After the embedding layers, the attribute and aspect-aware latent factors are concatenated and fed into a projection layer with tanh activation.", "The outputs are calculated as: γ u = E u (u), γ i = E i (i) (1) β u = E u (u), β i = E i (i) (2) u = tanh(W u [γ u ; γ i ] + b u ) (3) v = tanh(W v [β u ; β i ] + b v ) (4) where W u ∈ R n×2m , b u ∈ R n , W v ∈ R n×2k , b v ∈ R n are learnable parameters and n is the dimensionality of the hidden units in the decoder.", "Decoder with attention fusion layer The decoder is a two-layer GRU that predicts the target words given the start token.", "The hidden state of the decoder is initialized using the sum of the three encoders' outputs.", "The hidden state at time-step t is updated via the GRU unit based on the previous hidden state and the input word.", "Specifically: h 0 = e L + u + v (5) h t = GRU(w t , h t−1 ), (6) where h 0 ∈ R n is the decoder's initial hidden state and h t ∈ R n is the hidden state at time-step t. To fully exploit the encoder-side information, we apply an attention fusion layer to summarize the output of each encoder and jointly determine the final word distribution.", "For the sequence encoder, the attention vector is defined as in many other applications Luong et al., 2015) : a 1 t = L j=1 α 1 tj e j (7) α 1 tj = exp(tanh(v 1 α (W 1 α [e j ; h t ] + b 1 α )))/Z, (8) where a 1 t ∈ R n is the attention vector on the sequence encoder at time-step t, α 1 tj is the attention score over the encoder hidden state e j and decoder hidden state h t , and Z is a normalization term.", "For the attribute encoder, the attention vector is calculated as: a 2 t = j∈u,i α 2 tj γ j (9) α 2 tj = exp(tanh(v 2 α (W 2 α [γ j ; h t ] + b 2 α )))/Z, (10) where a 2 t ∈ R n is the attention vector on the attribute encoder, and α 2 tj is the attention score between the attribute latent factor γ j and decoder hidden state h t .", "Inspired by the copy mechanism (Gu et al., 2016; See et al., 2017) , we design an attention vector that estimates the probability that each aspect will be discussed in the next time-step: s ui = W s [β u ; β i ] + b s (11) a 3 t = tanh(W 3 α [s ui ; e t ; h t ] + b 3 α ), (12) where s ui ∈ R k is the aspect importance considering the interaction between u and i, e t is the decoder input after embedding layer at time-step t, and a 3 t ∈ R k is a probability vector to bias each aspect at time-step t. 
Finally, the first two attention vectors are concatenated with the decoder hidden state at time-step t and projected to obtain the output word distribution P v .", "The attention scores from the aspect encoder are then directly added to the aspect words in the final word distribution.", "The output probability for word w at time-step t is given by: where w t is the target word at time-step t, a 3 t [k] is the probability that aspect k will be discussed at time-step t, A k represents all words belonging to aspect k and 1 wt∈A k is a binary variable indicating whether w t belongs to aspect k. During inference, we use greedy decoding by choosing the word with maximum probability, denoted as y t = argmax wt softmax(P (w t )).", "Decoding finishes when an end token is encountered.", "Experiments We consider a real world dataset from Amazon Electronics (McAuley et al., 2015) to evaluate our model.", "We convert all text into lowercase, add start and end tokens to each review, and perform tokenization using NLTK.", "1 We discard reviews with length greater than 100 tokens and consider a vocabulary of 30,000 tokens.", "After preprocessing, the dataset contains 182,850 users, 59,043 items, and 992,172 reviews (sparsity 99.993%), which is much sparser than the datasets used in previous works (Dong et al., 2017; Ni et al., 2017) .", "On average, each review contains 49.32 tokens as well as a short-text summary of 4.52 tokens.", "In our experiments, the basic ExpansionNet uses these summaries as input phrases.", "We split the dataset into training (80%), validation (10%) and test sets (10%).", "All results are reported on the test set.", "Aspect Extraction We use the method 2 in (He et al., 2017) to extract 15 aspects and consider the top 100 words from each aspect.", "Table 2 shows 10 inferred aspects and representative words (inferred aspects are manually labeled).", "ExpansionNet calculates an attention score based on the user and item aspect-aware representation, then determines how much these representative words are biased in the output word distribution.", "1 https://www.nltk.org/ 2 https://github.com/ruidan/ Unsupervised-Aspect-Extraction Experiment Details We use PyTorch 3 to implement our model.", "4 Parameter settings are shown in Table 1 .", "For the attribute encoder and aspect encoder, we set the dimensionality to 64 and 15 respectively.", "For both the sequence encoder and decoder, we use a 2layer GRU with hidden size 512.", "We also add dropout layers before and after the GRUs.", "The dropout rate is set to 0.1.", "During training, the input sequences of the same source (e.g.", "review, summary) inside each batch are padded to the same length.", "Performance Evaluation We evaluate the model on six automatic metrics (Table 3) : Perplexity, BLEU-1/BLEU-4, ROUGE-L and Distinct-1/2 (percentage of distinct unigrams and bi-grams) .", "We compare User/Item user A3G831BTCLWGVQ and item B007M50PTM Review summary \"easy to use and nice standard apps\" Item title \"samsung galaxy tab 2 (10.1-Inch, wi-fi) 2012 model\" Real review \"the display is beautiful and the tablet is very easy to use.", "it comes with some really nice standard apps.\"", "AttrsSeq \"i bought this for my wife 's new ipad air .", "it fits perfectly and looks great .", "the only thing i do n't like is that the cover is a little too small for the ipad air . 
\"", "ExpansionNet \"i love this tablet .", "it is fast and easy to use .", "i have no complaints .", "i would recommend this tablet to anyone .\"", "+title \"i love this tablet .", "it is fast and easy to use .", "i have a galaxy tab 2 and i love it .\"", "+attribute & aspect \"i love this tablet .", "it is easy to use and the screen is very responsive .", "i love the fact that it has a micro sd slot .", "i have not tried the tablet app yet but i do n't have any problems with it .", "i am very happy with this tablet .\"", "Figure 2 : Examples of a real review and reviews generated by different models given a user, item, review summary, and item title.", "Highlights added for emphasis.", "against three baselines: Rand (randomly choose a review from the training set), GRU-LM (the GRU decoder works alone as a language model) and a state-of-the-art model Attr2Seq that only considers user and item attribute (Dong et al., 2017) .", "ExpansionNet (with summary, item title, attribute and aspect as input) achieves significant improvements over Attr2Seq on all metrics.", "As we add more input information, the model continues to obtain better results, except for the ROUGE-L metric.", "This proves that our model can effectively learn from short input phrases and aspect information and improve the correctness and diversity of generated results.", "Figure 2 presents a sample generation result.", "ExpansionNet captures fine-grained item information (e.g.", "that the item is a tablet), which Attr2Seq fails to recognize.", "Moreover, given a phrase like \"easy to use\" in the summary, ExpansionNet generates reviews containing the same text.", "This demonstrates the possibility of using our model in an assistive review generation scenario.", "Finally, given extra aspect information, the model successfully estimates that the screen would be an important aspect (i.e., for the current user and item); it generates phrases such as \"screen is very respon- sive\" about the aspect \"screen\" which is also covered in the real (ground-truth) review (\"display is beautiful\").", "We are also interested in seeing how the aspectaware representation can find related aspects and bias the generation to discuss more about those aspects.", "We analyze the average number of aspects in real and generated reviews and show on average how many aspects in real reviews are covered in generated reviews.", "We consider a review as covering an aspect if any of the aspect's representative words exists in the review.", "As shown in Table 4 , Attr2Seq tends to cover more aspects in generation, many of which are not discussed in real reviews.", "On the other hand, ExpansionNet better captures the distribution of aspects that are discussed in real reviews." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "4.1", "4.2", "4.3" ], "paper_header_content": [ "Introduction", "Related Work", "Approach", "Sequence encoder, attribute encoder and aspect encoder", "Decoder with attention fusion layer", "Experiments", "Aspect Extraction", "Experiment Details", "Performance Evaluation" ] }
GEM-SciDuet-train-78#paper-1203#slide-8
Conclusion and future work
Build ExpansionNet to incorporate short phrases, product title and aspect preference in review generation. Show aspect embedding and aspect extraction can be used in personalized text generation. Combine text expansion task with text rewriting techniques. Generate longer text such as product recommendation articles.
Build ExpansionNet to incorporate short phrases, product title and aspect preference in review generation. Show aspect embedding and aspect extraction can be used in personalized text generation. Combine text expansion task with text rewriting techniques. Generate longer text such as product recommendation articles.
[]
GEM-SciDuet-train-79#paper-1205#slide-0
1205
Examining Temporality in Document Classification
Many corpora span broad periods of time. Language processing models trained during one time period may not work well in future time periods, and the best model may depend on specific times of year (e.g., people might describe hotels differently in reviews during the winter versus the summer). This study investigates how document classifiers trained on documents from certain time intervals perform on documents from other time intervals, considering both seasonal intervals (intervals that repeat across years, e.g., winter) and non-seasonal intervals (e.g., specific years). We show experimentally that classification performance varies over time, and that performance can be improved by using a standard domain adaptation approach to adjust for changes in time.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Language, and therefore data derived from language, changes over time (Ullmann, 1962) .", "Word senses can shift over long periods of time (Wilkins, 1993; Wijaya and Yeniterzi, 2011; Hamilton et al., 2016) , and written language can change rapidly in online platforms (Eisenstein et al., 2014; Goel et al., 2016) .", "However, little is known about how shifts in text over time affect the performance of language processing systems.", "This paper focuses on a standard text processing task, document classification, to provide insight into how classification performance varies with time.", "We consider both long-term variations in text over time and seasonal variations which change throughout a year but repeat across years.", "Our empirical study considers corpora contain-ing formal text spanning decades as well as usergenerated content spanning only a few years.", "After describing the datasets and experiment design, this paper has two main sections, respectively addressing the following research questions: 1.", "In what ways does document classification depend on the timestamps of the documents?", "2.", "Can document classifiers be adapted to perform better in time-varying corpora?", "To address question 1, we train and test on data from different time periods, to understand how performance varies with time.", "To address question 2, we apply a domain adaptation approach, treating time intervals as domains.", "We show that in most cases this approach can lead to improvements in classification performance, even on future time intervals.", "Related Work Time is implicitly embedded in the classification process: classifiers are often built to be applied to future data that doesn't yet exist, and performance on held-out data is measured to estimate performance on future data whose distribution may have changed.", "Methods exist to adjust for changes in the data distribution (covariate shift) (Shimodaira, 2000; Bickel et al., 2009 ), but time is not typically incorporated into such methods explicitly.", "One line of work that explicitly studies the relationship between time and the distribution of data is work on classifying the time period in which a document was written (document dating) (Kanhabua and Nørvåg, 2008; Chambers, 2012; Kotsakos et al., 2014 ).", "However, this task is directed differently from our work: predicting timestamps given documents, rather than predicting information about documents given timestamps.", "Dataset Time intervals (non-seasonal) Time intervals (seasonal) Size Reviews (music) 1997-99, 2000-02, 2003-05, 2006-08, 2009-11, 2012-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 653K Reviews (hotels) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 78.6K Reviews (restaurants) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 1.16M News (economy) 1950-70, 1971-85, 1986-2000, 2001-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 6.29K Politics (platforms) 1948 (platforms) -56, 1960 (platforms) -68, 1972 (platforms) -80, 1984 (platforms) -92, 1996 
(platforms) -2004 (platforms) , 2008 (platforms) -16 n/a 35.8K Twitter (vaccines) 2013 (platforms) , 2014 (platforms) , 2015 (platforms) , 2016 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 9.83K Table 1 : Descriptions of corpora spanning multiple time intervals.", "Size is the number of documents.", "Datasets and Experimental Setup Our study experiments with six corpora: • Reviews: Three corpora containing reviews labeled with sentiment: music reviews from Amazon (He and McAuley, 2016) , and hotel reviews and restaurant reviews from Yelp.", "1 We discarded reviews that had fewer than 10 tokens or a helpfulness/usefulness score of zero.", "The reviews with neutral scores were removed.", "• Twitter: Tweets labeled with whether they indicate that the user received an influenza vaccination (i.e., a flu shot) (Huang et al., 2017) .", "Our experiments require documents to be grouped into time intervals.", "Table 1 shows the intervals for each corpus.", "Documents that fall outside of these time intervals were removed.", "We grouped documents into two types of intervals: • Seasonal: Time intervals within a year (e.g., January through March) that may be repeated across years.", "• Non-seasonal: Time intervals that do not repeat (e.g., 1997-1999) .", "For each dataset, we performed binary classification, implemented in sklearn (Pedregosa et al., 2011) .", "We built logistic regression classifiers with TF-IDF weighted n-gram features (n ∈ {1, 2, 3}), removing features that appeared in less than 2 documents.", "Except when otherwise specified, we held out a random 10% of documents as validation data for each dataset.", "We used Elastic Net (combined 1 and 2 ) regularization (Zou and Hastie, 2005) , and tuned the regularization parameters to maximize performance on the validation data.", "We evaluated the performance using weighted F1 scores.", "How Does Classification Performance Vary with Time?", "We first conduct an analysis of how classifier performance depends on the time intervals in which it is trained and applied.", "For each corpus, we train the classifier on each time interval and test on each time interval.", "We downsampled the training data within each time interval to match the number of documents in the smallest interval, so that differences in performance are not due to the size of the training data.", "In all experiments, we train a classifier on a partition of 80% of the documents in the time interval, and repeat this five times on different partitions, averaging the five F1 scores to produce the final estimate.", "When training and testing on the same interval, we test on the held-out 20% of documents in that interval (standard cross-validation).", "When testing on different time intervals, we test on all documents, since they are all held-out from the training interval; however, we still train on five subsets of 80% of documents, so that the training data is identical across all experiments.", "Finally, to understand why performance varies, we also qualitatively examined how the distribution of content changes across time intervals.", "To measure the distribution of content, we trained a topic model with 20 topics using gensim (Řehůřek and Sojka, 2010) with default parameters.", "We associated each document with one topic (the most probable topic in the document), and then calculated the proportion of each topic within a time period as the proportion of documents in that time period assigned to that topic.", "We can then visualize the extent to which the distribution of 20 topics varies by 
time.", "Seasonal Variability The top row of Figure 1 shows the test scores from training and testing on each pair of seasonal time intervals for four of the datasets.", "We observe very strong seasonal variations in the economic news corpus, with a drop in F1 score on the order of 10 when there is a mismatch in the season between training and testing.", "There is a similar, but weaker, effect on performance in the music reviews from Amazon and the vaccine tweets.", "There was virtually no difference in performance in any of the pairs in both review corpora from Yelp (restaurants, not pictured, and hotels).", "To help understand why the performance varies, Figure 2 (left) shows the distribution of topics in each seasonal interval for two corpora: Amazon music reviews and Twitter.", "We observe very little variation in the topic distribution across seasons in the Amazon corpus, but some variation in the Twitter corpus, which may explain the large performance differences when testing on held-out seasons in the Twitter data as compared to the Amazon corpus.", "For space, we do not show the descriptions of the topics, but instead only the shape of the distributions to show the degree of variability.", "We did qualitatively examine the differences in word features across the time periods, but had difficulty interpreting the observations and were unable to draw clear conclusions.", "Thus, characterizing the ways in which content distributions vary over time, and why this affects performance, is still an open question.", "Non-seasonal Variability The bottom row of Figure 1 shows the test scores from training and testing on each pair of nonseasonal time intervals.", "A strong pattern emerges in the political parties corpus: F1 scores can drop by as much as 40 points when testing on different time intervals.", "This is perhaps unsurprising, as this collection spans decades, and US party positions have substantially changed over time.", "The performance declines more when testing on time intervals that are further away in time from the training interval, suggesting that changes in party platforms shift gradually over time.", "In contrast, while there was a performance drop when testing outside the training interval in the economic news corpus, the drop was not gradual.", "In the Twitter dataset (not pictured), F1 dropped by an average of 4.9 points outside the training interval.", "We observe an intriguing non-seasonal pattern that is consistent in both of the review corpora from Yelp, but not in the music review corpus from Amazon (not pictured), which is that the classification performance fairly consistently increases over time.", "Since we sampled the dataset so that the time intervals have the same number of reviews, this suggests something else changed over time about the way reviews are written that makes the sentiment easier to detect.", "The right side of Figure 2 shows the topic distribution in the Amazon and Twitter datasets across non-seasonal intervals.", "We observe higher levels of variability across time in the non-seasonal intervals as compared to the seasonal intervals.", "Discussion Overall, it is clear that classifiers generally perform best when applied to the same time interval they were trained.", "Performance diminishes when applied to different time intervals, although different corpora exhibit differ patterns in the way in which the performance diminishes.", "This kind of analysis can be applied to any corpus and could provide insights into characteristics of the corpus 
that may be helpful when designing a classifier.", "Making Classification Robust to Temporality We now consider how to improve classifiers when working with datasets that span different time intervals.", "We propose to treat this as a domain adaptation problem.", "In domain adaptation, any partition of data that is expected to have a different distribution of features can be treated as a domain (Joshi et al., 2013) .", "Traditionally, domain adaptation is used to adapt models to a common task across rather different sets of data, e.g., a sentiment classifier for different types of products (Blitzer et al., 2007) .", "Recent work has also applied domain adaptation to adjust for potentially more subtle differences in data, such as adapting for differences in the demographics of authors (Volkova et al., 2013; Lynn et al., 2017) .", "We follow the same approach, treating time intervals as domains.", "In our experiments, we use the feature augmentation approach of Daumé III (2007) to perform domain adaptation.", "Each feature is duplicated to have a specific version of the feature for every domain, as well as a domain-independent version of the feature.", "In each instance, the domainindependent feature and the domain-specific feature for that instance's domain have the same feature value, while the value is zeroed out for the domain-specific features for the other domains.", "Data (Seasonal) Baseline Adaptation Reviews (music) .901 .919 Reviews (hotels) .867 .881 Reviews (restaurants) .874 .898 News (economy) .782 .782 Twitter (vaccines) .881 .880 Table 2 : F1 scores when treating each seasonal time interval as a domain and applying domain adaptation compared to using no adaptation.", "This is equivalent to a model where the feature weights are domain specific but share a Gaussian prior across domains (Finkel and Manning, 2009 ).", "This approach is widely used due to its simplicity, and derivatives of this approach have been used in similar work (e.g., (Lynn et al., 2017) ).", "Following Finkel and Manning (2009) , we separately adjust the regularization strength for the domain-independent feature weights and the domain-specific feature weights.", "Seasonal Adaptation We first examine classification performance on the datasets when grouping the seasonal time intervals (January-March, April-June, July-August, September-December) as domains and applying the feature augmentation approach for domain adaptation.", "As a baseline comparison, we apply the same classifier, but without domain adaptation.", "Results are shown in Table 2 .", "We see that applying domain adaptation provides a small boost in three of the datasets, and has no effect on two of the datasets.", "If this pattern holds in other corpora, then this suggests that it does not hurt performance to apply domain adaptation across different times of year, and in some cases can lead to a small performance boost.", "Data (Non-seasonal) Baseline Adaptation Adapt.+seasons Reviews (music) .895 .924 .910 Reviews (hotels) .886 .892 .920 Reviews (restaurants) .", "831 .879 .889 News (economy) .", "763 .780 .859 Politics (platforms) .661 .665 n/a Twitter (vaccines) .910 .903 .920 Table 3 : F1 scores when testing on the final time interval after training on all previous intervals.", "Non-seasonal Adaptation We now consider the non-seasonal time intervals (spans of years).", "In particular, we consider the scenario when one wants to apply a classifier trained on older data to future data.", "This requires a modification to the domain adaptation 
approach, because future data includes domains that did not exist in the training data, and thus we cannot learn domain-specific feature weights.", "To solve this, we train in the usual way, but when testing on future data, we only include the domain-independent features.", "The intuition is that the domain-independent parameters should be applicable to all domains, and so using only these features should lead to better generalizability to new domains.", "We test this hypothesis by training the classifiers on all but the last time interval, and testing on the final interval.", "For hyperparameter tuning, we used the final time interval of the training data (i.e., the penultimate interval) as the validation set.", "The intuition is that the penultimate interval is the closest to the test data and thus is expected to be most similar to it.", "Results are shown in the first three columns of Table 3 .", "We see that this approach leads to a small performance boost in all cases except the Twitter dataset.", "This means that this simple feature augmentation approach has the potential to make classifiers more robust to future changes in data.", "How to apply the feature augmentation technique to unseen domains is not well understood.", "By removing the domain-specific features, as we did here, the prediction model has changed, and so its behavior may be hard to predict.", "Nonetheless, this appears to be a successful approach.", "Adding Seasonal Features We also experimented with including the seasonal features when performing non-seasonal adaptation.", "In this setting, we train the models with two domain-specific features in addition to the domain-independent features: one for the season, and one for the non-seasonal interval.", "As above, we remove the non-seasonal features at test time; however, we retain the season-specific features in addition to the domain-independent features, as they can be reused in future years.", "The results of this approach are shown in the last column of Table 3 .", "We find that combining seasonal and non-seasonal features together leads to an additional performance gain in most cases.", "Conclusion Our experiments suggest that time can substantially affect the performance of document classification, and practitioners should be cognizant of this variable when developing classifiers.", "A simple analysis comparing pairs of time intervals can provide insights into how performance varies with time, which could be a good practice to do when initially working with a corpus.", "Our experiments also suggest that simple domain adaptation techniques can help account for this variation.", "4 We make two practical recommendations following the insights from this work.", "First, evaluation will be most accurate if the test data is as similar as possible to whatever future data the classifier will be applied to, and one way to achieve this is to select test data from the chronological end of the corpus, rather than randomly sampling data without regard to time.", "Second, we observed that performance on future data tends to increase when hyperparameter tuning is conducted on later data; thus, we also recommend sampling validation data from the chronological end of the corpus." ] }
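A minimal scikit-learn sketch of the classification setup described in the experimental section above (TF-IDF weighted 1-3 gram features with a minimum document frequency of 2, logistic regression with elastic-net regularization, weighted F1 evaluation). The regularization values and function names are assumptions rather than the authors' code.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

def train_and_evaluate(train_texts, train_labels, test_texts, test_labels,
                       C=1.0, l1_ratio=0.5):
    """Train on documents from one time interval, evaluate on another."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 3), min_df=2),
        # elastic-net (combined L1/L2) regularization requires the saga solver
        LogisticRegression(penalty="elasticnet", solver="saga",
                           C=C, l1_ratio=l1_ratio, max_iter=1000),
    )
    model.fit(train_texts, train_labels)
    return f1_score(test_labels, model.predict(test_texts), average="weighted")
```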
{ "paper_header_number": [ "1", "1.1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.2.1", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Datasets and Experimental Setup", "How Does Classification Performance", "Seasonal Variability", "Non-seasonal Variability", "Discussion", "Making Classification Robust to Temporality", "Seasonal Adaptation", "Non-seasonal Adaptation", "Adding Seasonal Features", "Conclusion" ] }
GEM-SciDuet-train-79#paper-1205#slide-0
Why is my classifier getting worse?
The data distribution has changed. Is there anything systematic about how it changes? Is there anything we can do to adapt to temporal changes? Subtle shifts in topic distribution.
The data distribution has changed. Is there anything systematic about how it changes? Is there anything we can do to adapt to temporal changes? Subtle shifts in topic distribution.
[]
GEM-SciDuet-train-79#paper-1205#slide-1
1205
Examining Temporality in Document Classification
Many corpora span broad periods of time. Language processing models trained during one time period may not work well in future time periods, and the best model may depend on specific times of year (e.g., people might describe hotels differently in reviews during the winter versus the summer). This study investigates how document classifiers trained on documents from certain time intervals perform on documents from other time intervals, considering both seasonal intervals (intervals that repeat across years, e.g., winter) and non-seasonal intervals (e.g., specific years). We show experimentally that classification performance varies over time, and that performance can be improved by using a standard domain adaptation approach to adjust for changes in time.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Language, and therefore data derived from language, changes over time (Ullmann, 1962) .", "Word senses can shift over long periods of time (Wilkins, 1993; Wijaya and Yeniterzi, 2011; Hamilton et al., 2016) , and written language can change rapidly in online platforms (Eisenstein et al., 2014; Goel et al., 2016) .", "However, little is known about how shifts in text over time affect the performance of language processing systems.", "This paper focuses on a standard text processing task, document classification, to provide insight into how classification performance varies with time.", "We consider both long-term variations in text over time and seasonal variations which change throughout a year but repeat across years.", "Our empirical study considers corpora contain-ing formal text spanning decades as well as usergenerated content spanning only a few years.", "After describing the datasets and experiment design, this paper has two main sections, respectively addressing the following research questions: 1.", "In what ways does document classification depend on the timestamps of the documents?", "2.", "Can document classifiers be adapted to perform better in time-varying corpora?", "To address question 1, we train and test on data from different time periods, to understand how performance varies with time.", "To address question 2, we apply a domain adaptation approach, treating time intervals as domains.", "We show that in most cases this approach can lead to improvements in classification performance, even on future time intervals.", "Related Work Time is implicitly embedded in the classification process: classifiers are often built to be applied to future data that doesn't yet exist, and performance on held-out data is measured to estimate performance on future data whose distribution may have changed.", "Methods exist to adjust for changes in the data distribution (covariate shift) (Shimodaira, 2000; Bickel et al., 2009 ), but time is not typically incorporated into such methods explicitly.", "One line of work that explicitly studies the relationship between time and the distribution of data is work on classifying the time period in which a document was written (document dating) (Kanhabua and Nørvåg, 2008; Chambers, 2012; Kotsakos et al., 2014 ).", "However, this task is directed differently from our work: predicting timestamps given documents, rather than predicting information about documents given timestamps.", "Dataset Time intervals (non-seasonal) Time intervals (seasonal) Size Reviews (music) 1997-99, 2000-02, 2003-05, 2006-08, 2009-11, 2012-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 653K Reviews (hotels) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 78.6K Reviews (restaurants) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 1.16M News (economy) 1950-70, 1971-85, 1986-2000, 2001-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 6.29K Politics (platforms) 1948 (platforms) -56, 1960 (platforms) -68, 1972 (platforms) -80, 1984 (platforms) -92, 1996 
(platforms) -2004 (platforms) , 2008 (platforms) -16 n/a 35.8K Twitter (vaccines) 2013 (platforms) , 2014 (platforms) , 2015 (platforms) , 2016 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 9.83K Table 1 : Descriptions of corpora spanning multiple time intervals.", "Size is the number of documents.", "Datasets and Experimental Setup Our study experiments with six corpora: • Reviews: Three corpora containing reviews labeled with sentiment: music reviews from Amazon (He and McAuley, 2016) , and hotel reviews and restaurant reviews from Yelp.", "1 We discarded reviews that had fewer than 10 tokens or a helpfulness/usefulness score of zero.", "The reviews with neutral scores were removed.", "• Twitter: Tweets labeled with whether they indicate that the user received an influenza vaccination (i.e., a flu shot) (Huang et al., 2017) .", "Our experiments require documents to be grouped into time intervals.", "Table 1 shows the intervals for each corpus.", "Documents that fall outside of these time intervals were removed.", "We grouped documents into two types of intervals: • Seasonal: Time intervals within a year (e.g., January through March) that may be repeated across years.", "• Non-seasonal: Time intervals that do not repeat (e.g., 1997-1999) .", "For each dataset, we performed binary classification, implemented in sklearn (Pedregosa et al., 2011) .", "We built logistic regression classifiers with TF-IDF weighted n-gram features (n ∈ {1, 2, 3}), removing features that appeared in less than 2 documents.", "Except when otherwise specified, we held out a random 10% of documents as validation data for each dataset.", "We used Elastic Net (combined 1 and 2 ) regularization (Zou and Hastie, 2005) , and tuned the regularization parameters to maximize performance on the validation data.", "We evaluated the performance using weighted F1 scores.", "How Does Classification Performance Vary with Time?", "We first conduct an analysis of how classifier performance depends on the time intervals in which it is trained and applied.", "For each corpus, we train the classifier on each time interval and test on each time interval.", "We downsampled the training data within each time interval to match the number of documents in the smallest interval, so that differences in performance are not due to the size of the training data.", "In all experiments, we train a classifier on a partition of 80% of the documents in the time interval, and repeat this five times on different partitions, averaging the five F1 scores to produce the final estimate.", "When training and testing on the same interval, we test on the held-out 20% of documents in that interval (standard cross-validation).", "When testing on different time intervals, we test on all documents, since they are all held-out from the training interval; however, we still train on five subsets of 80% of documents, so that the training data is identical across all experiments.", "Finally, to understand why performance varies, we also qualitatively examined how the distribution of content changes across time intervals.", "To measure the distribution of content, we trained a topic model with 20 topics using gensim (Řehůřek and Sojka, 2010) with default parameters.", "We associated each document with one topic (the most probable topic in the document), and then calculated the proportion of each topic within a time period as the proportion of documents in that time period assigned to that topic.", "We can then visualize the extent to which the distribution of 20 topics varies by 
time.", "Seasonal Variability The top row of Figure 1 shows the test scores from training and testing on each pair of seasonal time intervals for four of the datasets.", "We observe very strong seasonal variations in the economic news corpus, with a drop in F1 score on the order of 10 when there is a mismatch in the season between training and testing.", "There is a similar, but weaker, effect on performance in the music reviews from Amazon and the vaccine tweets.", "There was virtually no difference in performance in any of the pairs in both review corpora from Yelp (restaurants, not pictured, and hotels).", "To help understand why the performance varies, Figure 2 (left) shows the distribution of topics in each seasonal interval for two corpora: Amazon music reviews and Twitter.", "We observe very little variation in the topic distribution across seasons in the Amazon corpus, but some variation in the Twitter corpus, which may explain the large performance differences when testing on held-out seasons in the Twitter data as compared to the Amazon corpus.", "For space, we do not show the descriptions of the topics, but instead only the shape of the distributions to show the degree of variability.", "We did qualitatively examine the differences in word features across the time periods, but had difficulty interpreting the observations and were unable to draw clear conclusions.", "Thus, characterizing the ways in which content distributions vary over time, and why this affects performance, is still an open question.", "Non-seasonal Variability The bottom row of Figure 1 shows the test scores from training and testing on each pair of nonseasonal time intervals.", "A strong pattern emerges in the political parties corpus: F1 scores can drop by as much as 40 points when testing on different time intervals.", "This is perhaps unsurprising, as this collection spans decades, and US party positions have substantially changed over time.", "The performance declines more when testing on time intervals that are further away in time from the training interval, suggesting that changes in party platforms shift gradually over time.", "In contrast, while there was a performance drop when testing outside the training interval in the economic news corpus, the drop was not gradual.", "In the Twitter dataset (not pictured), F1 dropped by an average of 4.9 points outside the training interval.", "We observe an intriguing non-seasonal pattern that is consistent in both of the review corpora from Yelp, but not in the music review corpus from Amazon (not pictured), which is that the classification performance fairly consistently increases over time.", "Since we sampled the dataset so that the time intervals have the same number of reviews, this suggests something else changed over time about the way reviews are written that makes the sentiment easier to detect.", "The right side of Figure 2 shows the topic distribution in the Amazon and Twitter datasets across non-seasonal intervals.", "We observe higher levels of variability across time in the non-seasonal intervals as compared to the seasonal intervals.", "Discussion Overall, it is clear that classifiers generally perform best when applied to the same time interval they were trained.", "Performance diminishes when applied to different time intervals, although different corpora exhibit differ patterns in the way in which the performance diminishes.", "This kind of analysis can be applied to any corpus and could provide insights into characteristics of the corpus 
that may be helpful when designing a classifier.", "Making Classification Robust to Temporality We now consider how to improve classifiers when working with datasets that span different time intervals.", "We propose to treat this as a domain adaptation problem.", "In domain adaptation, any partition of data that is expected to have a different distribution of features can be treated as a domain (Joshi et al., 2013) .", "Traditionally, domain adaptation is used to adapt models to a common task across rather different sets of data, e.g., a sentiment classifier for different types of products (Blitzer et al., 2007) .", "Recent work has also applied domain adaptation to adjust for potentially more subtle differences in data, such as adapting for differences in the demographics of authors (Volkova et al., 2013; Lynn et al., 2017) .", "We follow the same approach, treating time intervals as domains.", "In our experiments, we use the feature augmentation approach of Daumé III (2007) to perform domain adaptation.", "Each feature is duplicated to have a specific version of the feature for every domain, as well as a domain-independent version of the feature.", "In each instance, the domainindependent feature and the domain-specific feature for that instance's domain have the same feature value, while the value is zeroed out for the domain-specific features for the other domains.", "Data (Seasonal) Baseline Adaptation Reviews (music) .901 .919 Reviews (hotels) .867 .881 Reviews (restaurants) .874 .898 News (economy) .782 .782 Twitter (vaccines) .881 .880 Table 2 : F1 scores when treating each seasonal time interval as a domain and applying domain adaptation compared to using no adaptation.", "This is equivalent to a model where the feature weights are domain specific but share a Gaussian prior across domains (Finkel and Manning, 2009 ).", "This approach is widely used due to its simplicity, and derivatives of this approach have been used in similar work (e.g., (Lynn et al., 2017) ).", "Following Finkel and Manning (2009) , we separately adjust the regularization strength for the domain-independent feature weights and the domain-specific feature weights.", "Seasonal Adaptation We first examine classification performance on the datasets when grouping the seasonal time intervals (January-March, April-June, July-August, September-December) as domains and applying the feature augmentation approach for domain adaptation.", "As a baseline comparison, we apply the same classifier, but without domain adaptation.", "Results are shown in Table 2 .", "We see that applying domain adaptation provides a small boost in three of the datasets, and has no effect on two of the datasets.", "If this pattern holds in other corpora, then this suggests that it does not hurt performance to apply domain adaptation across different times of year, and in some cases can lead to a small performance boost.", "Data (Non-seasonal) Baseline Adaptation Adapt.+seasons Reviews (music) .895 .924 .910 Reviews (hotels) .886 .892 .920 Reviews (restaurants) .", "831 .879 .889 News (economy) .", "763 .780 .859 Politics (platforms) .661 .665 n/a Twitter (vaccines) .910 .903 .920 Table 3 : F1 scores when testing on the final time interval after training on all previous intervals.", "Non-seasonal Adaptation We now consider the non-seasonal time intervals (spans of years).", "In particular, we consider the scenario when one wants to apply a classifier trained on older data to future data.", "This requires a modification to the domain adaptation 
approach, because future data includes domains that did not exist in the training data, and thus we cannot learn domain-specific feature weights.", "To solve this, we train in the usual way, but when testing on future data, we only include the domain-independent features.", "The intuition is that the domain-independent parameters should be applicable to all domains, and so using only these features should lead to better generalizability to new domains.", "We test this hypothesis by training the classifiers on all but the last time interval, and testing on the final interval.", "For hyperparameter tuning, we used the final time interval of the training data (i.e., the penultimate interval) as the validation set.", "The intuition is that the penultimate interval is the closest to the test data and thus is expected to be most similar to it.", "Results are shown in the first three columns of Table 3 .", "We see that this approach leads to a small performance boost in all cases except the Twitter dataset.", "This means that this simple feature augmentation approach has the potential to make classifiers more robust to future changes in data.", "How to apply the feature augmentation technique to unseen domains is not well understood.", "By removing the domain-specific features, as we did here, the prediction model has changed, and so its behavior may be hard to predict.", "Nonetheless, this appears to be a successful approach.", "Adding Seasonal Features We also experimented with including the seasonal features when performing non-seasonal adaptation.", "In this setting, we train the models with two domain-specific features in addition to the domain-independent features: one for the season, and one for the non-seasonal interval.", "As above, we remove the non-seasonal features at test time; however, we retain the season-specific features in addition to the domain-independent features, as they can be reused in future years.", "The results of this approach are shown in the last column of Table 3 .", "We find that combining seasonal and non-seasonal features together leads to an additional performance gain in most cases.", "Conclusion Our experiments suggest that time can substantially affect the performance of document classification, and practitioners should be cognizant of this variable when developing classifiers.", "A simple analysis comparing pairs of time intervals can provide insights into how performance varies with time, which could be a good practice to do when initially working with a corpus.", "Our experiments also suggest that simple domain adaptation techniques can help account for this variation.", "4 We make two practical recommendations following the insights from this work.", "First, evaluation will be most accurate if the test data is as similar as possible to whatever future data the classifier will be applied to, and one way to achieve this is to select test data from the chronological end of the corpus, rather than randomly sampling data without regard to time.", "Second, we observed that performance on future data tends to increase when hyperparameter tuning is conducted on later data; thus, we also recommend sampling validation data from the chronological end of the corpus." ] }
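A minimal sketch of the feature-augmentation approach described above, with each time interval treated as a domain: every document's feature vector is copied into a shared block plus one block per domain, and the copies for all other domains are zeroed out. Sparse-matrix handling and the domain ordering are implementation assumptions, not the authors' code.

```python
from scipy import sparse

def augment_features(X, doc_domains, all_domains):
    """Daume III (2007)-style feature augmentation with time intervals as domains.

    X:           (n_docs, n_feats) sparse feature matrix (e.g. TF-IDF n-grams)
    doc_domains: length-n_docs list giving each document's time interval
    all_domains: ordered list of the time intervals seen during training
    """
    blocks = [X]  # domain-independent copy of every feature
    for domain in all_domains:
        mask = sparse.diags([1.0 if d == domain else 0.0 for d in doc_domains])
        blocks.append(mask @ X)  # domain-specific copy, zero for other domains
    return sparse.hstack(blocks).tocsr()
```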
{ "paper_header_number": [ "1", "1.1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.2.1", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Datasets and Experimental Setup", "How Does Classification Performance", "Seasonal Variability", "Non-seasonal Variability", "Discussion", "Making Classification Robust to Temporality", "Seasonal Adaptation", "Non-seasonal Adaptation", "Adding Seasonal Features", "Conclusion" ] }
GEM-SciDuet-train-79#paper-1205#slide-1
Experiments
Two types of time periods: seasonal and non-seasonal. Logistic regression, n-gram features. Six datasets, each grouped into 4-6 time periods.
Two types of time periods: seasonal and non-seasonal. Logistic regression, n-gram features. Six datasets, each grouped into 4-6 time periods.
[]
GEM-SciDuet-train-79#paper-1205#slide-2
1205
Examining Temporality in Document Classification
Many corpora span broad periods of time. Language processing models trained during one time period may not work well in future time periods, and the best model may depend on specific times of year (e.g., people might describe hotels differently in reviews during the winter versus the summer). This study investigates how document classifiers trained on documents from certain time intervals perform on documents from other time intervals, considering both seasonal intervals (intervals that repeat across years, e.g., winter) and non-seasonal intervals (e.g., specific years). We show experimentally that classification performance varies over time, and that performance can be improved by using a standard domain adaptation approach to adjust for changes in time.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Language, and therefore data derived from language, changes over time (Ullmann, 1962) .", "Word senses can shift over long periods of time (Wilkins, 1993; Wijaya and Yeniterzi, 2011; Hamilton et al., 2016) , and written language can change rapidly in online platforms (Eisenstein et al., 2014; Goel et al., 2016) .", "However, little is known about how shifts in text over time affect the performance of language processing systems.", "This paper focuses on a standard text processing task, document classification, to provide insight into how classification performance varies with time.", "We consider both long-term variations in text over time and seasonal variations which change throughout a year but repeat across years.", "Our empirical study considers corpora contain-ing formal text spanning decades as well as usergenerated content spanning only a few years.", "After describing the datasets and experiment design, this paper has two main sections, respectively addressing the following research questions: 1.", "In what ways does document classification depend on the timestamps of the documents?", "2.", "Can document classifiers be adapted to perform better in time-varying corpora?", "To address question 1, we train and test on data from different time periods, to understand how performance varies with time.", "To address question 2, we apply a domain adaptation approach, treating time intervals as domains.", "We show that in most cases this approach can lead to improvements in classification performance, even on future time intervals.", "Related Work Time is implicitly embedded in the classification process: classifiers are often built to be applied to future data that doesn't yet exist, and performance on held-out data is measured to estimate performance on future data whose distribution may have changed.", "Methods exist to adjust for changes in the data distribution (covariate shift) (Shimodaira, 2000; Bickel et al., 2009 ), but time is not typically incorporated into such methods explicitly.", "One line of work that explicitly studies the relationship between time and the distribution of data is work on classifying the time period in which a document was written (document dating) (Kanhabua and Nørvåg, 2008; Chambers, 2012; Kotsakos et al., 2014 ).", "However, this task is directed differently from our work: predicting timestamps given documents, rather than predicting information about documents given timestamps.", "Dataset Time intervals (non-seasonal) Time intervals (seasonal) Size Reviews (music) 1997-99, 2000-02, 2003-05, 2006-08, 2009-11, 2012-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 653K Reviews (hotels) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 78.6K Reviews (restaurants) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 1.16M News (economy) 1950-70, 1971-85, 1986-2000, 2001-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 6.29K Politics (platforms) 1948 (platforms) -56, 1960 (platforms) -68, 1972 (platforms) -80, 1984 (platforms) -92, 1996 
(platforms) -2004 (platforms) , 2008 (platforms) -16 n/a 35.8K Twitter (vaccines) 2013 (platforms) , 2014 (platforms) , 2015 (platforms) , 2016 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 9.83K Table 1 : Descriptions of corpora spanning multiple time intervals.", "Size is the number of documents.", "Datasets and Experimental Setup Our study experiments with six corpora: • Reviews: Three corpora containing reviews labeled with sentiment: music reviews from Amazon (He and McAuley, 2016) , and hotel reviews and restaurant reviews from Yelp.", "1 We discarded reviews that had fewer than 10 tokens or a helpfulness/usefulness score of zero.", "The reviews with neutral scores were removed.", "• Twitter: Tweets labeled with whether they indicate that the user received an influenza vaccination (i.e., a flu shot) (Huang et al., 2017) .", "Our experiments require documents to be grouped into time intervals.", "Table 1 shows the intervals for each corpus.", "Documents that fall outside of these time intervals were removed.", "We grouped documents into two types of intervals: • Seasonal: Time intervals within a year (e.g., January through March) that may be repeated across years.", "• Non-seasonal: Time intervals that do not repeat (e.g., 1997-1999) .", "For each dataset, we performed binary classification, implemented in sklearn (Pedregosa et al., 2011) .", "We built logistic regression classifiers with TF-IDF weighted n-gram features (n ∈ {1, 2, 3}), removing features that appeared in less than 2 documents.", "Except when otherwise specified, we held out a random 10% of documents as validation data for each dataset.", "We used Elastic Net (combined 1 and 2 ) regularization (Zou and Hastie, 2005) , and tuned the regularization parameters to maximize performance on the validation data.", "We evaluated the performance using weighted F1 scores.", "How Does Classification Performance Vary with Time?", "We first conduct an analysis of how classifier performance depends on the time intervals in which it is trained and applied.", "For each corpus, we train the classifier on each time interval and test on each time interval.", "We downsampled the training data within each time interval to match the number of documents in the smallest interval, so that differences in performance are not due to the size of the training data.", "In all experiments, we train a classifier on a partition of 80% of the documents in the time interval, and repeat this five times on different partitions, averaging the five F1 scores to produce the final estimate.", "When training and testing on the same interval, we test on the held-out 20% of documents in that interval (standard cross-validation).", "When testing on different time intervals, we test on all documents, since they are all held-out from the training interval; however, we still train on five subsets of 80% of documents, so that the training data is identical across all experiments.", "Finally, to understand why performance varies, we also qualitatively examined how the distribution of content changes across time intervals.", "To measure the distribution of content, we trained a topic model with 20 topics using gensim (Řehůřek and Sojka, 2010) with default parameters.", "We associated each document with one topic (the most probable topic in the document), and then calculated the proportion of each topic within a time period as the proportion of documents in that time period assigned to that topic.", "We can then visualize the extent to which the distribution of 20 topics varies by 
time.", "Seasonal Variability The top row of Figure 1 shows the test scores from training and testing on each pair of seasonal time intervals for four of the datasets.", "We observe very strong seasonal variations in the economic news corpus, with a drop in F1 score on the order of 10 when there is a mismatch in the season between training and testing.", "There is a similar, but weaker, effect on performance in the music reviews from Amazon and the vaccine tweets.", "There was virtually no difference in performance in any of the pairs in both review corpora from Yelp (restaurants, not pictured, and hotels).", "To help understand why the performance varies, Figure 2 (left) shows the distribution of topics in each seasonal interval for two corpora: Amazon music reviews and Twitter.", "We observe very little variation in the topic distribution across seasons in the Amazon corpus, but some variation in the Twitter corpus, which may explain the large performance differences when testing on held-out seasons in the Twitter data as compared to the Amazon corpus.", "For space, we do not show the descriptions of the topics, but instead only the shape of the distributions to show the degree of variability.", "We did qualitatively examine the differences in word features across the time periods, but had difficulty interpreting the observations and were unable to draw clear conclusions.", "Thus, characterizing the ways in which content distributions vary over time, and why this affects performance, is still an open question.", "Non-seasonal Variability The bottom row of Figure 1 shows the test scores from training and testing on each pair of nonseasonal time intervals.", "A strong pattern emerges in the political parties corpus: F1 scores can drop by as much as 40 points when testing on different time intervals.", "This is perhaps unsurprising, as this collection spans decades, and US party positions have substantially changed over time.", "The performance declines more when testing on time intervals that are further away in time from the training interval, suggesting that changes in party platforms shift gradually over time.", "In contrast, while there was a performance drop when testing outside the training interval in the economic news corpus, the drop was not gradual.", "In the Twitter dataset (not pictured), F1 dropped by an average of 4.9 points outside the training interval.", "We observe an intriguing non-seasonal pattern that is consistent in both of the review corpora from Yelp, but not in the music review corpus from Amazon (not pictured), which is that the classification performance fairly consistently increases over time.", "Since we sampled the dataset so that the time intervals have the same number of reviews, this suggests something else changed over time about the way reviews are written that makes the sentiment easier to detect.", "The right side of Figure 2 shows the topic distribution in the Amazon and Twitter datasets across non-seasonal intervals.", "We observe higher levels of variability across time in the non-seasonal intervals as compared to the seasonal intervals.", "Discussion Overall, it is clear that classifiers generally perform best when applied to the same time interval they were trained.", "Performance diminishes when applied to different time intervals, although different corpora exhibit differ patterns in the way in which the performance diminishes.", "This kind of analysis can be applied to any corpus and could provide insights into characteristics of the corpus 
that may be helpful when designing a classifier.", "Making Classification Robust to Temporality We now consider how to improve classifiers when working with datasets that span different time intervals.", "We propose to treat this as a domain adaptation problem.", "In domain adaptation, any partition of data that is expected to have a different distribution of features can be treated as a domain (Joshi et al., 2013) .", "Traditionally, domain adaptation is used to adapt models to a common task across rather different sets of data, e.g., a sentiment classifier for different types of products (Blitzer et al., 2007) .", "Recent work has also applied domain adaptation to adjust for potentially more subtle differences in data, such as adapting for differences in the demographics of authors (Volkova et al., 2013; Lynn et al., 2017) .", "We follow the same approach, treating time intervals as domains.", "In our experiments, we use the feature augmentation approach of Daumé III (2007) to perform domain adaptation.", "Each feature is duplicated to have a specific version of the feature for every domain, as well as a domain-independent version of the feature.", "In each instance, the domainindependent feature and the domain-specific feature for that instance's domain have the same feature value, while the value is zeroed out for the domain-specific features for the other domains.", "Data (Seasonal) Baseline Adaptation Reviews (music) .901 .919 Reviews (hotels) .867 .881 Reviews (restaurants) .874 .898 News (economy) .782 .782 Twitter (vaccines) .881 .880 Table 2 : F1 scores when treating each seasonal time interval as a domain and applying domain adaptation compared to using no adaptation.", "This is equivalent to a model where the feature weights are domain specific but share a Gaussian prior across domains (Finkel and Manning, 2009 ).", "This approach is widely used due to its simplicity, and derivatives of this approach have been used in similar work (e.g., (Lynn et al., 2017) ).", "Following Finkel and Manning (2009) , we separately adjust the regularization strength for the domain-independent feature weights and the domain-specific feature weights.", "Seasonal Adaptation We first examine classification performance on the datasets when grouping the seasonal time intervals (January-March, April-June, July-August, September-December) as domains and applying the feature augmentation approach for domain adaptation.", "As a baseline comparison, we apply the same classifier, but without domain adaptation.", "Results are shown in Table 2 .", "We see that applying domain adaptation provides a small boost in three of the datasets, and has no effect on two of the datasets.", "If this pattern holds in other corpora, then this suggests that it does not hurt performance to apply domain adaptation across different times of year, and in some cases can lead to a small performance boost.", "Data (Non-seasonal) Baseline Adaptation Adapt.+seasons Reviews (music) .895 .924 .910 Reviews (hotels) .886 .892 .920 Reviews (restaurants) .", "831 .879 .889 News (economy) .", "763 .780 .859 Politics (platforms) .661 .665 n/a Twitter (vaccines) .910 .903 .920 Table 3 : F1 scores when testing on the final time interval after training on all previous intervals.", "Non-seasonal Adaptation We now consider the non-seasonal time intervals (spans of years).", "In particular, we consider the scenario when one wants to apply a classifier trained on older data to future data.", "This requires a modification to the domain adaptation 
approach, because future data includes domains that did not exist in the training data, and thus we cannot learn domain-specific feature weights.", "To solve this, we train in the usual way, but when testing on future data, we only include the domain-independent features.", "The intuition is that the domain-independent parameters should be applicable to all domains, and so using only these features should lead to better generalizability to new domains.", "We test this hypothesis by training the classifiers on all but the last time interval, and testing on the final interval.", "For hyperparameter tuning, we used the final time interval of the training data (i.e., the penultimate interval) as the validation set.", "The intuition is that the penultimate interval is the closest to the test data and thus is expected to be most similar to it.", "Results are shown in the first three columns of Table 3 .", "We see that this approach leads to a small performance boost in all cases except the Twitter dataset.", "This means that this simple feature augmentation approach has the potential to make classifiers more robust to future changes in data.", "How to apply the feature augmentation technique to unseen domains is not well understood.", "By removing the domain-specific features, as we did here, the prediction model has changed, and so its behavior may be hard to predict.", "Nonetheless, this appears to be a successful approach.", "Adding Seasonal Features We also experimented with including the seasonal features when performing non-seasonal adaptation.", "In this setting, we train the models with two domain-specific features in addition to the domain-independent features: one for the season, and one for the non-seasonal interval.", "As above, we remove the non-seasonal features at test time; however, we retain the season-specific features in addition to the domain-independent features, as they can be reused in future years.", "The results of this approach are shown in the last column of Table 3 .", "We find that combining seasonal and non-seasonal features together leads to an additional performance gain in most cases.", "Conclusion Our experiments suggest that time can substantially affect the performance of document classification, and practitioners should be cognizant of this variable when developing classifiers.", "A simple analysis comparing pairs of time intervals can provide insights into how performance varies with time, which could be a good practice to do when initially working with a corpus.", "Our experiments also suggest that simple domain adaptation techniques can help account for this variation.", "4 We make two practical recommendations following the insights from this work.", "First, evaluation will be most accurate if the test data is as similar as possible to whatever future data the classifier will be applied to, and one way to achieve this is to select test data from the chronological end of the corpus, rather than randomly sampling data without regard to time.", "Second, we observed that performance on future data tends to increase when hyperparameter tuning is conducted on later data; thus, we also recommend sampling validation data from the chronological end of the corpus." ] }
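A minimal sketch of the per-interval experimental protocol described in the record above: TF-IDF weighted 1-3 gram features, logistic regression with Elastic Net regularization, weighted F1, training on an 80% partition of one time interval and testing either on the held-out 20% (same interval) or on all documents of another interval. This is an illustrative reconstruction, not the authors' code; the `docs_by_interval` input and the regularization values are assumptions that would be tuned on validation data.

```python
# Illustrative sketch of the per-interval train/test grid described above.
# Assumes docs_by_interval maps an interval name to (texts, labels) lists.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline


def evaluate_interval_pair(docs_by_interval, train_iv, test_iv, seed=0):
    """Train on one time interval and test on another; return weighted F1."""
    train_texts, train_labels = docs_by_interval[train_iv]
    # 80/20 partition of the training interval, as in the paper's protocol.
    tr_x, held_x, tr_y, held_y = train_test_split(
        train_texts, train_labels, test_size=0.2, random_state=seed)

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 3), min_df=2),
        # Elastic Net (mixed L1/L2) regularization; l1_ratio and C are
        # placeholder values standing in for tuned hyperparameters.
        LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=1000),
    )
    clf.fit(tr_x, tr_y)

    if train_iv == test_iv:
        test_x, test_y = held_x, held_y            # held-out 20% of the same interval
    else:
        test_x, test_y = docs_by_interval[test_iv]  # all documents of the other interval
    return f1_score(test_y, clf.predict(test_x), average="weighted")
```

Averaging this over five random partitions and over all (train, test) interval pairs reproduces the grid-style analysis reported above.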
{ "paper_header_number": [ "1", "1.1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.2.1", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Datasets and Experimental Setup", "How Does Classification Performance", "Seasonal Variability", "Non-seasonal Variability", "Discussion", "Making Classification Robust to Temporality", "Seasonal Adaptation", "Non-seasonal Adaptation", "Adding Seasonal Features", "Conclusion" ] }
GEM-SciDuet-train-79#paper-1205#slide-2
RQ1 How does performance vary
Train and test on each time period Measure how performance drops when the test period is different Balanced so each time period has same # of documents Yelp reviews are getting more informative over time? This type of analysis can reveal characteristics of corpus Unanswered: why does performance vary?
Train and test on each time period Measure how performance drops when the test period is different Balanced so each time period has same # of documents Yelp reviews are getting more informative over time? This type of analysis can reveal characteristics of corpus Unanswered: why does performance vary?
[]
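The content-drift check mentioned in the record above (a 20-topic model, each document mapped to its most probable topic, and topic proportions computed per time interval) could be prototyped with gensim roughly as follows; the tokenized `docs_by_interval` structure is an assumption, and this is not the authors' code.

```python
# Rough sketch of the topic-distribution-by-interval analysis described above.
# docs_by_interval is assumed to map interval name -> list of token lists.
from collections import Counter
from gensim import corpora, models


def topic_proportions_by_interval(docs_by_interval, num_topics=20):
    all_docs = [doc for docs in docs_by_interval.values() for doc in docs]
    dictionary = corpora.Dictionary(all_docs)
    corpus = [dictionary.doc2bow(doc) for doc in all_docs]
    lda = models.LdaModel(corpus, num_topics=num_topics, id2word=dictionary)

    proportions = {}
    for interval, docs in docs_by_interval.items():
        top_topics = []
        for doc in docs:
            topics = lda.get_document_topics(dictionary.doc2bow(doc))
            top_topics.append(max(topics, key=lambda t: t[1])[0])  # most probable topic
        counts = Counter(top_topics)
        proportions[interval] = {k: counts[k] / len(docs) for k in range(num_topics)}
    return proportions
```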
GEM-SciDuet-train-79#paper-1205#slide-3
1205
Examining Temporality in Document Classification
Many corpora span broad periods of time. Language processing models trained during one time period may not work well in future time periods, and the best model may depend on specific times of year (e.g., people might describe hotels differently in reviews during the winter versus the summer). This study investigates how document classifiers trained on documents from certain time intervals perform on documents from other time intervals, considering both seasonal intervals (intervals that repeat across years, e.g., winter) and non-seasonal intervals (e.g., specific years). We show experimentally that classification performance varies over time, and that performance can be improved by using a standard domain adaptation approach to adjust for changes in time.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111 ], "paper_content_text": [ "Introduction Language, and therefore data derived from language, changes over time (Ullmann, 1962) .", "Word senses can shift over long periods of time (Wilkins, 1993; Wijaya and Yeniterzi, 2011; Hamilton et al., 2016) , and written language can change rapidly in online platforms (Eisenstein et al., 2014; Goel et al., 2016) .", "However, little is known about how shifts in text over time affect the performance of language processing systems.", "This paper focuses on a standard text processing task, document classification, to provide insight into how classification performance varies with time.", "We consider both long-term variations in text over time and seasonal variations which change throughout a year but repeat across years.", "Our empirical study considers corpora contain-ing formal text spanning decades as well as usergenerated content spanning only a few years.", "After describing the datasets and experiment design, this paper has two main sections, respectively addressing the following research questions: 1.", "In what ways does document classification depend on the timestamps of the documents?", "2.", "Can document classifiers be adapted to perform better in time-varying corpora?", "To address question 1, we train and test on data from different time periods, to understand how performance varies with time.", "To address question 2, we apply a domain adaptation approach, treating time intervals as domains.", "We show that in most cases this approach can lead to improvements in classification performance, even on future time intervals.", "Related Work Time is implicitly embedded in the classification process: classifiers are often built to be applied to future data that doesn't yet exist, and performance on held-out data is measured to estimate performance on future data whose distribution may have changed.", "Methods exist to adjust for changes in the data distribution (covariate shift) (Shimodaira, 2000; Bickel et al., 2009 ), but time is not typically incorporated into such methods explicitly.", "One line of work that explicitly studies the relationship between time and the distribution of data is work on classifying the time period in which a document was written (document dating) (Kanhabua and Nørvåg, 2008; Chambers, 2012; Kotsakos et al., 2014 ).", "However, this task is directed differently from our work: predicting timestamps given documents, rather than predicting information about documents given timestamps.", "Dataset Time intervals (non-seasonal) Time intervals (seasonal) Size Reviews (music) 1997-99, 2000-02, 2003-05, 2006-08, 2009-11, 2012-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 653K Reviews (hotels) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 78.6K Reviews (restaurants) 2005-08, 2009-11, 2012-14, 2015-17 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 1.16M News (economy) 1950-70, 1971-85, 1986-2000, 2001-14 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 6.29K Politics (platforms) 1948 (platforms) -56, 1960 (platforms) -68, 1972 (platforms) -80, 1984 (platforms) -92, 1996 
(platforms) -2004 (platforms) , 2008 (platforms) -16 n/a 35.8K Twitter (vaccines) 2013 (platforms) , 2014 (platforms) , 2015 (platforms) , 2016 Jan-Mar, Apr-Jun, Jul-Sep, Oct-Dec 9.83K Table 1 : Descriptions of corpora spanning multiple time intervals.", "Size is the number of documents.", "Datasets and Experimental Setup Our study experiments with six corpora: • Reviews: Three corpora containing reviews labeled with sentiment: music reviews from Amazon (He and McAuley, 2016) , and hotel reviews and restaurant reviews from Yelp.", "1 We discarded reviews that had fewer than 10 tokens or a helpfulness/usefulness score of zero.", "The reviews with neutral scores were removed.", "• Twitter: Tweets labeled with whether they indicate that the user received an influenza vaccination (i.e., a flu shot) (Huang et al., 2017) .", "Our experiments require documents to be grouped into time intervals.", "Table 1 shows the intervals for each corpus.", "Documents that fall outside of these time intervals were removed.", "We grouped documents into two types of intervals: • Seasonal: Time intervals within a year (e.g., January through March) that may be repeated across years.", "• Non-seasonal: Time intervals that do not repeat (e.g., 1997-1999) .", "For each dataset, we performed binary classification, implemented in sklearn (Pedregosa et al., 2011) .", "We built logistic regression classifiers with TF-IDF weighted n-gram features (n ∈ {1, 2, 3}), removing features that appeared in less than 2 documents.", "Except when otherwise specified, we held out a random 10% of documents as validation data for each dataset.", "We used Elastic Net (combined 1 and 2 ) regularization (Zou and Hastie, 2005) , and tuned the regularization parameters to maximize performance on the validation data.", "We evaluated the performance using weighted F1 scores.", "How Does Classification Performance Vary with Time?", "We first conduct an analysis of how classifier performance depends on the time intervals in which it is trained and applied.", "For each corpus, we train the classifier on each time interval and test on each time interval.", "We downsampled the training data within each time interval to match the number of documents in the smallest interval, so that differences in performance are not due to the size of the training data.", "In all experiments, we train a classifier on a partition of 80% of the documents in the time interval, and repeat this five times on different partitions, averaging the five F1 scores to produce the final estimate.", "When training and testing on the same interval, we test on the held-out 20% of documents in that interval (standard cross-validation).", "When testing on different time intervals, we test on all documents, since they are all held-out from the training interval; however, we still train on five subsets of 80% of documents, so that the training data is identical across all experiments.", "Finally, to understand why performance varies, we also qualitatively examined how the distribution of content changes across time intervals.", "To measure the distribution of content, we trained a topic model with 20 topics using gensim (Řehůřek and Sojka, 2010) with default parameters.", "We associated each document with one topic (the most probable topic in the document), and then calculated the proportion of each topic within a time period as the proportion of documents in that time period assigned to that topic.", "We can then visualize the extent to which the distribution of 20 topics varies by 
time.", "Seasonal Variability The top row of Figure 1 shows the test scores from training and testing on each pair of seasonal time intervals for four of the datasets.", "We observe very strong seasonal variations in the economic news corpus, with a drop in F1 score on the order of 10 when there is a mismatch in the season between training and testing.", "There is a similar, but weaker, effect on performance in the music reviews from Amazon and the vaccine tweets.", "There was virtually no difference in performance in any of the pairs in both review corpora from Yelp (restaurants, not pictured, and hotels).", "To help understand why the performance varies, Figure 2 (left) shows the distribution of topics in each seasonal interval for two corpora: Amazon music reviews and Twitter.", "We observe very little variation in the topic distribution across seasons in the Amazon corpus, but some variation in the Twitter corpus, which may explain the large performance differences when testing on held-out seasons in the Twitter data as compared to the Amazon corpus.", "For space, we do not show the descriptions of the topics, but instead only the shape of the distributions to show the degree of variability.", "We did qualitatively examine the differences in word features across the time periods, but had difficulty interpreting the observations and were unable to draw clear conclusions.", "Thus, characterizing the ways in which content distributions vary over time, and why this affects performance, is still an open question.", "Non-seasonal Variability The bottom row of Figure 1 shows the test scores from training and testing on each pair of nonseasonal time intervals.", "A strong pattern emerges in the political parties corpus: F1 scores can drop by as much as 40 points when testing on different time intervals.", "This is perhaps unsurprising, as this collection spans decades, and US party positions have substantially changed over time.", "The performance declines more when testing on time intervals that are further away in time from the training interval, suggesting that changes in party platforms shift gradually over time.", "In contrast, while there was a performance drop when testing outside the training interval in the economic news corpus, the drop was not gradual.", "In the Twitter dataset (not pictured), F1 dropped by an average of 4.9 points outside the training interval.", "We observe an intriguing non-seasonal pattern that is consistent in both of the review corpora from Yelp, but not in the music review corpus from Amazon (not pictured), which is that the classification performance fairly consistently increases over time.", "Since we sampled the dataset so that the time intervals have the same number of reviews, this suggests something else changed over time about the way reviews are written that makes the sentiment easier to detect.", "The right side of Figure 2 shows the topic distribution in the Amazon and Twitter datasets across non-seasonal intervals.", "We observe higher levels of variability across time in the non-seasonal intervals as compared to the seasonal intervals.", "Discussion Overall, it is clear that classifiers generally perform best when applied to the same time interval they were trained.", "Performance diminishes when applied to different time intervals, although different corpora exhibit differ patterns in the way in which the performance diminishes.", "This kind of analysis can be applied to any corpus and could provide insights into characteristics of the corpus 
that may be helpful when designing a classifier.", "Making Classification Robust to Temporality We now consider how to improve classifiers when working with datasets that span different time intervals.", "We propose to treat this as a domain adaptation problem.", "In domain adaptation, any partition of data that is expected to have a different distribution of features can be treated as a domain (Joshi et al., 2013) .", "Traditionally, domain adaptation is used to adapt models to a common task across rather different sets of data, e.g., a sentiment classifier for different types of products (Blitzer et al., 2007) .", "Recent work has also applied domain adaptation to adjust for potentially more subtle differences in data, such as adapting for differences in the demographics of authors (Volkova et al., 2013; Lynn et al., 2017) .", "We follow the same approach, treating time intervals as domains.", "In our experiments, we use the feature augmentation approach of Daumé III (2007) to perform domain adaptation.", "Each feature is duplicated to have a specific version of the feature for every domain, as well as a domain-independent version of the feature.", "In each instance, the domainindependent feature and the domain-specific feature for that instance's domain have the same feature value, while the value is zeroed out for the domain-specific features for the other domains.", "Data (Seasonal) Baseline Adaptation Reviews (music) .901 .919 Reviews (hotels) .867 .881 Reviews (restaurants) .874 .898 News (economy) .782 .782 Twitter (vaccines) .881 .880 Table 2 : F1 scores when treating each seasonal time interval as a domain and applying domain adaptation compared to using no adaptation.", "This is equivalent to a model where the feature weights are domain specific but share a Gaussian prior across domains (Finkel and Manning, 2009 ).", "This approach is widely used due to its simplicity, and derivatives of this approach have been used in similar work (e.g., (Lynn et al., 2017) ).", "Following Finkel and Manning (2009) , we separately adjust the regularization strength for the domain-independent feature weights and the domain-specific feature weights.", "Seasonal Adaptation We first examine classification performance on the datasets when grouping the seasonal time intervals (January-March, April-June, July-August, September-December) as domains and applying the feature augmentation approach for domain adaptation.", "As a baseline comparison, we apply the same classifier, but without domain adaptation.", "Results are shown in Table 2 .", "We see that applying domain adaptation provides a small boost in three of the datasets, and has no effect on two of the datasets.", "If this pattern holds in other corpora, then this suggests that it does not hurt performance to apply domain adaptation across different times of year, and in some cases can lead to a small performance boost.", "Data (Non-seasonal) Baseline Adaptation Adapt.+seasons Reviews (music) .895 .924 .910 Reviews (hotels) .886 .892 .920 Reviews (restaurants) .", "831 .879 .889 News (economy) .", "763 .780 .859 Politics (platforms) .661 .665 n/a Twitter (vaccines) .910 .903 .920 Table 3 : F1 scores when testing on the final time interval after training on all previous intervals.", "Non-seasonal Adaptation We now consider the non-seasonal time intervals (spans of years).", "In particular, we consider the scenario when one wants to apply a classifier trained on older data to future data.", "This requires a modification to the domain adaptation 
approach, because future data includes domains that did not exist in the training data, and thus we cannot learn domain-specific feature weights.", "To solve this, we train in the usual way, but when testing on future data, we only include the domain-independent features.", "The intuition is that the domain-independent parameters should be applicable to all domains, and so using only these features should lead to better generalizability to new domains.", "We test this hypothesis by training the classifiers on all but the last time interval, and testing on the final interval.", "For hyperparameter tuning, we used the final time interval of the training data (i.e., the penultimate interval) as the validation set.", "The intuition is that the penultimate interval is the closest to the test data and thus is expected to be most similar to it.", "Results are shown in the first three columns of Table 3 .", "We see that this approach leads to a small performance boost in all cases except the Twitter dataset.", "This means that this simple feature augmentation approach has the potential to make classifiers more robust to future changes in data.", "How to apply the feature augmentation technique to unseen domains is not well understood.", "By removing the domain-specific features, as we did here, the prediction model has changed, and so its behavior may be hard to predict.", "Nonetheless, this appears to be a successful approach.", "Adding Seasonal Features We also experimented with including the seasonal features when performing non-seasonal adaptation.", "In this setting, we train the models with two domain-specific features in addition to the domain-independent features: one for the season, and one for the non-seasonal interval.", "As above, we remove the non-seasonal features at test time; however, we retain the season-specific features in addition to the domain-independent features, as they can be reused in future years.", "The results of this approach are shown in the last column of Table 3 .", "We find that combining seasonal and non-seasonal features together leads to an additional performance gain in most cases.", "Conclusion Our experiments suggest that time can substantially affect the performance of document classification, and practitioners should be cognizant of this variable when developing classifiers.", "A simple analysis comparing pairs of time intervals can provide insights into how performance varies with time, which could be a good practice to do when initially working with a corpus.", "Our experiments also suggest that simple domain adaptation techniques can help account for this variation.", "4 We make two practical recommendations following the insights from this work.", "First, evaluation will be most accurate if the test data is as similar as possible to whatever future data the classifier will be applied to, and one way to achieve this is to select test data from the chronological end of the corpus, rather than randomly sampling data without regard to time.", "Second, we observed that performance on future data tends to increase when hyperparameter tuning is conducted on later data; thus, we also recommend sampling validation data from the chronological end of the corpus." ] }
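A minimal sketch of the feature-augmentation domain adaptation (Daumé III, 2007) applied above, treating time intervals as domains. Function and variable names are illustrative, and the separate regularization strengths for general versus domain-specific weights (Finkel and Manning, 2009) are not shown.

```python
# Sketch of 'frustratingly easy' feature augmentation with time intervals as domains.
# Each feature gets a domain-independent copy plus one copy per training domain;
# for unseen (future) domains at test time, only the general copy is populated.
from scipy import sparse


def augment(features: sparse.csr_matrix, domains, domain_list):
    """features: n_docs x n_feats TF-IDF matrix; domains: per-document domain labels."""
    blocks = [features]  # domain-independent ("general") block
    for d in domain_list:
        mask = sparse.diags([1.0 if dom == d else 0.0 for dom in domains])
        blocks.append(mask @ features)  # zero out rows that belong to other domains
    return sparse.hstack(blocks).tocsr()


def augment_for_unseen_domain(features: sparse.csr_matrix, domain_list):
    """Future data: keep only the general block, zeroing all domain-specific blocks."""
    n_docs, n_feats = features.shape
    zeros = sparse.csr_matrix((n_docs, n_feats * len(domain_list)))
    return sparse.hstack([features, zeros]).tocsr()
```

For future (unseen) intervals only the domain-independent block carries weight, which is the modification the record above describes for testing on the final time interval; combining a season-specific block with a year-specific block gives the "Adapt.+seasons" setting.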
{ "paper_header_number": [ "1", "1.1", "2", "3", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "4.2.1", "5" ], "paper_header_content": [ "Introduction", "Related Work", "Datasets and Experimental Setup", "How Does Classification Performance", "Seasonal Variability", "Non-seasonal Variability", "Discussion", "Making Classification Robust to Temporality", "Seasonal Adaptation", "Non-seasonal Adaptation", "Adding Seasonal Features", "Conclusion" ] }
GEM-SciDuet-train-79#paper-1205#slide-3
RQ2 Can we adapt to temporal variations
Address this as a domain adaptation problem Treat explicitly-defined time periods as domains Feature augmentation method from Daume III (2007) Domain-specific copies of the feature set: General Jan-Mar Apr-Jun Jul-Sep Oct-Dec Straightforward to apply to seasonal features: How to use in non-seasonal settings? Separately weigh domain-specific features During training: weigh domain-specific features differently Can also combine with seasonal domains 3 copies of each feature (general, year-specific, season-specific) Simulating performance on future data: Train in initial time periods Tune on second-to-last period Test on final time period Simple-to-implement adaptation can make classifiers more Suggestion: tune hyperparameters on heldout data from the chronological end of your corpus (cf. cross-validation) Can lead to better performance on future data
Address this as a domain adaptation problem Treat explicitly-defined time periods as domains Feature augmentation method from Daume III (2007) Domain-specific copies of the feature set: General Jan-Mar Apr-Jun Jul-Sep Oct-Dec Straightforward to apply to seasonal features: How to use in non-seasonal settings? Separately weigh domain-specific features During training: weigh domain-specific features differently Can also combine with seasonal domains 3 copies of each feature (general, year-specific, season-specific) Simulating performance on future data: Train in initial time periods Tune on second-to-last period Test on final time period Simple-to-implement adaptation can make classifiers more Suggestion: tune hyperparameters on heldout data from the chronological end of your corpus (cf. cross-validation) Can lead to better performance on future data
[]
GEM-SciDuet-train-80#paper-1206#slide-0
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
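A sketch of the policy-gradient (risk) update described in the record above: sample k candidate trees, include the gold tree among them, score each with negative labeled F1 standardized by its running mean and standard deviation, and weight each candidate's log-probability gradient by that cost. The parser interface (`sample`, `log_prob`) and the `f1_fn` callable are assumptions, not the authors' implementation.

```python
# Sketch of one policy-gradient (risk) training step, under the interface
# assumptions stated above.
import torch


class RunningStats:
    """Standardize costs by the running mean/std over all candidates seen so far."""
    def __init__(self):
        self.costs = []

    def standardize(self, cost: float) -> float:
        self.costs.append(cost)
        mean = sum(self.costs) / len(self.costs)
        var = sum((c - mean) ** 2 for c in self.costs) / max(len(self.costs) - 1, 1)
        return (cost - mean) / (var ** 0.5 + 1e-6)


def risk_step(parser, optimizer, sentence, gold_tree, f1_fn, stats, k=10):
    # k candidates: k-1 samples from the model's distribution plus the gold tree.
    candidates = [parser.sample(sentence) for _ in range(k - 1)] + [gold_tree]
    optimizer.zero_grad()
    loss = torch.zeros(())
    for tree in candidates:
        delta = stats.standardize(-f1_fn(tree, gold_tree))   # Δ = negative labeled F1
        loss = loss + delta * parser.log_prob(sentence, tree)
    loss.backward()
    optimizer.step()
    return loss.item()
```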
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-0
Parsing by Local Decisions
The cat took a nap (S (NP The cat (VP
The cat took a nap (S (NP The cat (VP
[]
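The three-rule dynamic oracle for the top-down (RNNG) transition system described in the record above could be prototyped along the following lines. This is a simplified reconstruction from the prose, not the authors' code: the bookkeeping of which gold constituents count as already opened or produced is tracked here by (label, start, end) triples maintained by the caller, which is cruder than a faithful implementation would need.

```python
# Simplified sketch of the top-down dynamic oracle described above (not the authors' code).
# A gold constituent is a (label, start, end) triple over word positions.
from typing import List, Set, Tuple

Constituent = Tuple[str, int, int]


def topdown_oracle(open_stack: List[Tuple[str, int]],   # (label, start) of open constituents, bottom -> top
                   n_shifted: int,                        # words shifted so far
                   n_words: int,
                   gold: Set[Constituent],
                   produced: Set[Constituent],            # gold constituents already closed
                   opened: Set[Constituent]) -> str:      # gold constituents already opened
    # Rule 1: try to CLOSE the most recently opened constituent, provided it
    # already contains at least one shifted word.
    if open_stack:
        label, start = open_stack[-1]
        if n_shifted > start:
            closes_gold_now = ((label, start, n_shifted) in gold
                               and (label, start, n_shifted) not in produced)
            closable_later = any((label, start, end) in gold
                                 and (label, start, end) not in produced
                                 for end in range(n_shifted + 1, n_words + 1))
            if closes_gold_now or not closable_later:
                return "CLOSE"
    # Rule 2: OPEN the outermost (widest) unopened gold constituent whose span
    # starts at the next unshifted word.
    if n_shifted < n_words:
        candidates = [c for c in gold if c[1] == n_shifted and c not in opened]
        if candidates:
            label = max(candidates, key=lambda c: c[2])[0]
            return f"OPEN {label}"
    # Rule 3: otherwise SHIFT the next word.
    return "SHIFT"
```

As the record notes, this oracle is not claimed to be F1-optimal; it is only meant to supervise exploration from arbitrary (possibly off-gold) parser states.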
GEM-SciDuet-train-80#paper-1206#slide-1
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
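A small sketch of the cost-augmented distribution in Eq. 2, assuming the raw action scores z(a, s) for the current state have already been computed by the parser's state encoder; the 0/1 cost Δ(a, a*) relative to the oracle action a* follows the definition above, and the function names are illustrative.

```python
import numpy as np

def softmax_margin_distribution(scores, oracle_index, margin=1.0):
    """Cost-augmented distribution  p_SMM(a | s) ∝ exp(z(a, s) + Δ(a, a*)).

    `scores` holds the raw action scores z(a, s) for one state, `oracle_index`
    is the oracle action a*, and Δ(a, a*) is 0 for a* and `margin` (1 in the
    paper) for every other action.
    """
    z = np.asarray(scores, dtype=float) + margin  # add the cost to every action...
    z[oracle_index] -= margin                     # ...then remove it from the oracle action
    z -= z.max()                                  # numerical stability
    p = np.exp(z)
    return p / p.sum()

def softmax_margin_loss(scores, oracle_index, margin=1.0):
    """Training loss for one state: -log p_SMM(a* | s)."""
    return -np.log(softmax_margin_distribution(scores, oracle_index, margin)[oracle_index])
```

Without exploration, this loss is applied at the states along the gold action sequence (using the static oracle action as a*); with exploration, it is applied at the states of sampled candidate trees, using the dynamic oracle's action as a* at each explored state.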
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
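For concreteness, a framework-agnostic sketch of the sampled risk-objective update from Section 3.1. The parser interface (`sample_tree`, `log_prob_grad`) and the `labeled_f1` helper are illustrative assumptions rather than the released implementations; candidate sampling, inclusion of the gold tree, negative labeled F1 as Δ, and the running standardization of Δ follow the description in the paper.

```python
class RunningStandardizer:
    """Standardize costs by their running mean and std across all candidates seen so far."""
    def __init__(self, eps=1e-8):
        self.n, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

    def __call__(self, x):
        # Welford's online mean/variance update.
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
        std = (self.m2 / max(self.n - 1, 1)) ** 0.5
        return (x - self.mean) / (std + self.eps)


def risk_gradient_step(sentence, gold_tree, parser, labeled_f1, standardizer, k=10):
    """Estimate ∇R(θ) for one sentence as  Σ_{y ∈ Y(x)} Δ(y, y*) ∇ log p(y | x; θ).

    Assumed (illustrative) interfaces:
      parser.sample_tree(sentence)      -> one tree sampled from p(y | x; θ)
      parser.log_prob_grad(sentence, y) -> ∇_θ log p(y | x; θ) as a flat, numpy-like vector
      labeled_f1(y, gold_tree)          -> labeled F1 in [0, 1]
    `standardizer` should be a single RunningStandardizer shared across the whole run.
    """
    candidates = [parser.sample_tree(sentence) for _ in range(k - 1)]
    candidates.append(gold_tree)           # include the gold tree (following Shen et al., 2016)
    grad = None
    for y in candidates:
        cost = -labeled_f1(y, gold_tree)   # Δ = negative labeled F1
        g = parser.log_prob_grad(sentence, y) * standardizer(cost)
        grad = g if grad is None else grad + g
    return grad
```

The returned vector estimates ∇R(θ) for one sentence; a mini-batch step sums these estimates over the batch and takes a gradient-descent step, so candidates with better-than-average F1 have their log-probability pushed up and worse-than-average candidates pushed down.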
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-1
Non local Consequences
NP NP VP NP The cat took a nap . The cat took a nap . (S (NP The cat Prediction (S (NP (VP
NP NP VP NP The cat took a nap . The cat took a nap . (S (NP The cat Prediction (S (NP (VP
[]
GEM-SciDuet-train-80#paper-1206#slide-2
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
GEM-SciDuet-train-80#paper-1206#slide-2
Dynamic Oracle Training
Explore at training time. Supervise each state with an expert policy. True Parse (S (NP The cat Prediction (S (NP (VP The Oracle (NP The The cat choose log to maximize achievable F1 (typically)
Explore at training time. Supervise each state with an expert policy. True Parse (S (NP The cat Prediction (S (NP (VP The Oracle (NP The The cat choose log to maximize achievable F1 (typically)
[]
GEM-SciDuet-train-80#paper-1206#slide-3
1206
GEM-SciDuet-train-80#paper-1206#slide-3
Dynamic Oracles Help
Expert Policies / Dynamic Oracles PTB Constituency Parsing F1 Coavoux and Crabbe, 2016
Expert Policies / Dynamic Oracles PTB Constituency Parsing F1 Coavoux and Crabbe, 2016
[]
GEM-SciDuet-train-80#paper-1206#slide-4
1206
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
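Aside: the policy-gradient recipe in the paper content above (sample k candidate trees per sentence, include the gold tree among the candidates, weight each candidate's log-probability by a standardized negative labeled F1) can be sketched as below. This is a minimal illustration only; the parser interface (sample_tree, tree_log_prob), the labeled_f1 helper, and the use of PyTorch-style autograd are assumptions for exposition, not the authors' released code.

import torch

class RunningStandardizer:
    # Standardize costs with a running mean/std over all candidates seen so far,
    # as the text above describes for reducing gradient-estimate variance.
    def __init__(self, eps=1e-6):
        self.n, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps
    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        std = (self.m2 / max(self.n - 1, 1)) ** 0.5
        return (x - self.mean) / (std + self.eps)

def risk_loss(parser, sentence, gold_tree, labeled_f1, standardizer, k=10):
    # Candidate set Y(x): k-1 samples from the model plus the gold tree itself.
    candidates = [parser.sample_tree(sentence) for _ in range(k - 1)] + [gold_tree]
    loss = torch.zeros(())
    for tree in candidates:
        cost = -labeled_f1(tree, gold_tree)             # Delta = negative labeled F1
        logp = parser.tree_log_prob(sentence, tree)     # differentiable log p(y | x; theta)
        loss = loss + standardizer.update(cost) * logp  # REINFORCE-style term
    return loss / len(candidates)

# Hypothetical usage: loss = risk_loss(...); loss.backward(); optimizer.step()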
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-4
Reinforcement Learning Helps in other tasks
several tasks, including CCG parsing, machine translation, and dependency parsing
several tasks, including CCG parsing, machine translation, and dependency parsing
[]
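Returning to the softmax-margin objective (Eq. 2) in the paper content above: it only changes how action logits are normalized at training time, adding a 0/1 cost against the dynamic oracle's action before the softmax. A minimal NumPy sketch follows; the function names are illustrative, not taken from any of the released parsers.

import numpy as np

def log_softmax(z):
    z = z - z.max()
    return z - np.log(np.exp(z).sum())

def softmax_margin_loss(logits, oracle_action):
    # p_SMM(a | s) is proportional to exp(z(a, s) + Delta(a, a*)), with Delta = 0
    # for the oracle action and 1 otherwise; the training loss is -log p_SMM(a* | s).
    delta = np.ones_like(logits)
    delta[oracle_action] = 0.0
    return -log_softmax(logits + delta)[oracle_action]

# Compared with plain cross-entropy, the loss stays high whenever a costly
# competing action still scores close to the oracle action.
print(softmax_margin_loss(np.array([2.0, 1.9, 0.5]), oracle_action=0))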
GEM-SciDuet-train-80#paper-1206#slide-5
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-5
Policy Gradient Training
Minimize expected sequence-level cost, computed by sampling k candidate trees from the model for each input; addresses loss mismatch. [Figure: input sentence "The cat took a nap." with k sampled candidate parses (S, S-INV, ...); analogous example for "The man had an idea."]
Minimize expected sequence-level cost, computed by sampling k candidate trees from the model for each input; addresses loss mismatch. [Figure: input sentence "The cat took a nap." with k sampled candidate parses (S, S-INV, ...); analogous example for "The man had an idea."]
[]
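The "expected sequence-level cost" on this slide is negative labeled F1 over the sampled candidates, which only requires comparing labeled span sets. A generic sketch (not necessarily the exact scorer used for the reported numbers):

from collections import Counter

def labeled_f1(pred_spans, gold_spans):
    # Spans are (label, start, end); matched brackets are counted as a multiset intersection.
    pred, gold = Counter(pred_spans), Counter(gold_spans)
    matched = sum((pred & gold).values())
    if matched == 0:
        return 0.0
    precision = matched / sum(pred.values())
    recall = matched / sum(gold.values())
    return 2 * precision * recall / (precision + recall)

def cost(pred_spans, gold_spans):
    return -labeled_f1(pred_spans, gold_spans)   # Delta used in the risk objective

# Toy example loosely based on the slide's sentence:
gold = [("S", 0, 6), ("NP", 0, 2), ("VP", 2, 5)]
pred = [("S", 0, 6), ("NP", 0, 2), ("ADJP", 2, 5)]
print(labeled_f1(pred, gold))   # 2/3 precision and recall, F1 about 0.667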
GEM-SciDuet-train-80#paper-1206#slide-8
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-8
English PTB F1
Static oracle Policy gradient Dynamic oracle
Static oracle Policy gradient Dynamic oracle
[]
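The "dynamic oracle" condition compared on this slide corresponds to likelihood training with exploration in the paper content above: sample trajectories from the model, but supervise every visited state with the oracle's action. A rough sketch with placeholder parser and oracle interfaces, rather than a faithful reimplementation:

def exploration_likelihood_loss(parser, oracle, sentence, gold_tree, k=10):
    # L_E sums log p(a*(s) | s) over all states of the sampled candidates;
    # it is negated here so that minimizing the return value maximizes L_E.
    loss = 0.0
    for _ in range(k):
        state = parser.initial_state(sentence)
        while not state.is_final():
            a_star = oracle(state, gold_tree)               # best action still reachable
            loss = loss - parser.action_log_prob(state, a_star)
            a = parser.sample_action(state)                 # exploration: follow the model's sample
            state = parser.transition(state, a)
    return loss / k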
GEM-SciDuet-train-80#paper-1206#slide-9
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-9
Training Efficiency
PTB learning curves for the Top-Down parser Development F1 static oracle dynamic oracle policy gradient
PTB learning curves for the Top-Down parser Development F1 static oracle dynamic oracle policy gradient
[]
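Editor's note: the policy-gradient (REINFORCE-style) procedure described in the paper content of the record above samples k candidate trees per sentence, scores each with negative labeled F1, standardizes the costs with running statistics, and always includes the gold tree among the candidates. The minimal Python sketch below illustrates that surrogate loss; it is not the authors' code, and sample_tree, tree_log_prob, and labeled_f1 are hypothetical callables standing in for the parser's sampler, its tree log-probability, and the evaluation metric.

import math

class RunningStats:
    """Running mean / standard deviation used to standardize candidate costs."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)

    def standardize(self, x):
        if self.n < 2:
            return 0.0  # no spread information yet; contribute nothing
        std = math.sqrt(self.m2 / (self.n - 1))
        return (x - self.mean) / std if std > 0 else 0.0

def risk_surrogate(sentence, gold_tree, k, stats,
                   sample_tree, tree_log_prob, labeled_f1):
    """Surrogate loss whose gradient approximates the sampled risk gradient:
    sum over candidates of (standardized cost) * log p(tree | sentence)."""
    candidates = [sample_tree(sentence) for _ in range(k - 1)] + [gold_tree]
    loss = 0.0
    for tree in candidates:
        cost = -labeled_f1(tree, gold_tree)   # Delta = negative labeled F1
        stats.update(cost)                    # running stats across training
        loss += stats.standardize(cost) * tree_log_prob(tree, sentence)
    return loss

Because the cost enters only as a plain number, in an automatic-differentiation framework the gradient flows solely through the log-probability term, which matches the sampled approximation of the risk gradient quoted above.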
GEM-SciDuet-train-80#paper-1206#slide-10
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-10
French Treebank F1
Static oracle Policy gradient Dynamic oracle
Static oracle Policy gradient Dynamic oracle
[]
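Editor's note: the softmax-margin objective (Eq. 2 in the paper content above) amounts to adding the action-level cost to the logits before normalizing. The sketch below is a minimal illustration assuming NumPy, with the paper's 0/1 cost that is zero only for the dynamic oracle's action; it is not the released implementation.

import numpy as np

def softmax_margin_probs(scores, oracle_action):
    """p_SMM(a | s) proportional to exp(z(a, s) + Delta(a, a*)),
    where Delta = 0 for the oracle action and 1 for every other action."""
    delta = np.ones_like(scores, dtype=float)
    delta[oracle_action] = 0.0
    z = scores + delta
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

The training loss is then the negative log of this augmented probability for the oracle action; the cost augmentation applies only during training, and the plain softmax over z is used at inference, since the cost depends on the oracle action.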
GEM-SciDuet-train-80#paper-1206#slide-11
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-11
Chinese Penn Treebank v5.1 F1
Static oracle Policy gradient Dynamic oracle
Static oracle Policy gradient Dynamic oracle
[]
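Editor's note: the three-rule dynamic oracle for the top-down (RNNG) transition system described in the paper content above lends itself to a short illustration. The sketch below is a simplified rendering, not the authors' implementation: rule 1 only examines the topmost open constituent (the one a CLOSE action would actually act on), and the state representation (open_stack, shifted, produced) together with the (label, start, end) span encoding are assumptions made for this example.

def top_down_oracle(open_stack, shifted, produced, gold_spans, n_words):
    """Return "CLOSE", ("OPEN", label), or "SHIFT" for the current parser state.

    open_stack : open constituents as (label, start) pairs, bottom to top
    shifted    : number of words shifted so far
    produced   : set of (label, start, end) constituents already closed
    gold_spans : set of (label, start, end) constituents in the gold tree
    n_words    : sentence length
    """
    # Rule 1: the topmost open constituent is closable only if a word has been
    # shifted since it was opened. Close it if that yields a gold constituent
    # not yet produced, or if no gold constituent with this label and start
    # could still be closed at a later position.
    if open_stack and open_stack[-1][1] < shifted:
        label, start = open_stack[-1]
        closes_new_gold = ((label, start, shifted) in gold_spans
                           and (label, start, shifted) not in produced)
        reachable_later = any((label, start, end) in gold_spans
                              for end in range(shifted + 1, n_words + 1))
        if closes_new_gold or not reachable_later:
            return "CLOSE"

    # Rule 2: open the outermost not-yet-opened gold constituent whose span
    # begins at the next unshifted word (outermost = widest span).
    open_pairs = set(open_stack)
    candidates = [(label, start, end) for (label, start, end) in gold_spans
                  if start == shifted
                  and (label, start, end) not in produced
                  and (label, start) not in open_pairs]
    if candidates:
        widest = max(candidates, key=lambda span: span[2])
        return ("OPEN", widest[0])

    # Rule 3: otherwise shift the next word; transition-system validity checks
    # (e.g. not shifting past the end of the sentence) are assumed to be
    # enforced by the parser itself.
    return "SHIFT"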
GEM-SciDuet-train-80#paper-1206#slide-12
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
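The softmax-margin objective in Eq. 2 amounts to adding the action-level cost to each score before normalization, so every non-oracle action receives a margin of 1 at training time. A minimal sketch, assuming the scores are a PyTorch tensor; this is not the parsers' actual implementation:

```python
import torch
import torch.nn.functional as F

def softmax_margin_loss(logits, oracle_action):
    """logits: scores z(a, s, theta) over actions; oracle_action: index of a*."""
    cost = torch.ones_like(logits)        # Delta(a, a*) = 1 for every a != a*
    cost[oracle_action] = 0.0             # Delta(a*, a*) = 0
    augmented = logits + cost             # z(a, s, theta) + Delta(a, a*), as in Eq. 2
    # Negative log of the cost-augmented probability assigned to the oracle action.
    return -F.log_softmax(augmented, dim=-1)[oracle_action]
```

The cost term modifies only the training loss; the model remains locally normalized, which is what allows the same sampling-based exploration used for policy gradient.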
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-12
Conclusions
Local decisions can have non-local consequences How to deal with the issues caused by local decisions? Dynamic oracles: efficient, model specific Policy gradient: slower to train, but general purpose
Local decisions can have non-local consequences How to deal with the issues caused by local decisions? Dynamic oracles: efficient, model specific Policy gradient: slower to train, but general purpose
[]
GEM-SciDuet-train-80#paper-1206#slide-13
1206
Policy Gradient as a Proxy for Dynamic Oracles in Constituency Parsing
Dynamic oracles provide strong supervision for training constituency parsers with exploration, but must be custom defined for a given parser's transition system. We explore using a policy gradient method as a parser-agnostic alternative. In addition to directly optimizing for a tree-level metric such as F1, policy gradient has the potential to reduce exposure bias by allowing exploration during training; moreover, it does not require a dynamic oracle for supervision. On four constituency parsers in three languages, the method substantially outperforms static oracle likelihood training in almost all settings. For parsers where a dynamic oracle is available (including a novel oracle which we define for the transition system of Dyer et al. (2016) ), policy gradient typically recaptures a substantial fraction of the performance gain afforded by the dynamic oracle.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105 ], "paper_content_text": [ "Introduction Many recent state-of-the-art models for constituency parsing are transition based, decomposing production of each parse tree into a sequence of action decisions Cross and Huang, 2016; Liu and Zhang, 2017; , building on a long line of work in transition-based parsing (Nivre, 2003; Yamada and Matsumoto, 2003; Henderson, 2004; Zhang and Clark, 2011; Chen and Manning, 2014; Andor et al., 2016; Kiperwasser and Goldberg, 2016) .", "However, models of this type, which decompose structure prediction into sequential decisions, can be prone to two issues (Ranzato et al., 2016; Wiseman and Rush, 2016) .", "The first is exposure bias: if, at training time, the model only observes states resulting from correct past decisions, it will not be prepared to recover from its own mistakes during prediction.", "Second is the loss mismatch between the action-level loss used at training and any structure-level evaluation metric, for example F1.", "A large family of techniques address the exposure bias problem by allowing the model to make mistakes and explore incorrect states during training, supervising actions at the resulting states using an expert policy (Daumé III et al., 2009; Ross et al., 2011; Choi and Palmer, 2011; Chang et al., 2015) ; these expert policies are typically referred to as dynamic oracles in parsing (Goldberg and Nivre, 2012; .", "While dynamic oracles have produced substantial improvements in constituency parsing performance (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , they must be custom designed for each transition system.", "To address the loss mismatch problem, another line of work has directly optimized for structurelevel cost functions (Goodman, 1996; Och, 2003) .", "Recent methods applied to models that produce output sequentially commonly use policy gradient (Auli and Gao, 2014; Ranzato et al., 2016; Shen et al., 2016) or beam search (Xu et al., 2016; Wiseman and Rush, 2016; Edunov et al., 2017) at training time to minimize a structured cost.", "These methods also reduce exposure bias through exploration but do not require an expert policy for supervision.", "In this work, we apply a simple policy gradient method to train four different state-of-theart transition-based constituency parsers to maximize expected F1.", "We compare against training with a dynamic oracle (both to supervise exploration and provide loss-augmentation) where one is available, including a novel dynamic oracle that we define for the top-down transition system of .", "We find that while policy gradient usually outperforms standard likelihood training, it typically underperforms the dynamic oracle-based methods -which provide direct, model-aware supervision about which actions are best to take from arbitrary parser states.", "However, a substantial fraction of each dynamic oracle's performance gain is often recovered using the model-agnostic policy gradient method.", "In the process, we obtain new state-of-the-art results for single-model discriminative transition-based parsers trained on the 
English PTB (92.6 F1), French Treebank (83.5 F1), and Penn Chinese Treebank Version 5.1 (87.0 F1).", "Models The transition-based parsers we use all decompose production of a parse tree y for a sentence x into a sequence of actions (a 1 , .", ".", ".", "a T ) and resulting states (s 1 , .", ".", ".", "s T +1 ).", "Actions a t are predicted sequentially, conditioned on a representation of the parser's current state s t and parameters θ: p(y|x; θ) = T t=1 p(a t | s t ; θ) (1) We investigate four parsers with varying transition systems and methods of encoding the current state and sentence: (1) the discriminative Recurrent Neural Network Grammars (RNNG) parser of , (2) the In-Order parser of Liu and Zhang (2017) , (3) the Span-Based parser of Cross and Huang (2016) , and (4) the Top-Down parser of .", "1 We refer to the original papers for descriptions of the transition systems and model parameterizations.", "Training Procedures Likelihood training without exploration maximizes Eq.", "1 for trees in the training corpus, but may be prone to exposure bias and loss mismatch (Section 1).", "Dynamic oracle methods are known to improve on this training procedure for a variety of parsers (Coavoux and Crabbé, 2016; Cross and Huang, 2016; González and Gómez-Rodríguez, 2018) , supervising exploration during training by providing the parser with the best action to take at each explored state.", "We describe how policy gradient can be applied as an oracle-free alternative.", "We then compare to several variants of dynamic oracle training which focus on addressing exposure bias, loss mismatch, or both.", "Policy Gradient Given an arbitrary cost function ∆ comparing structured outputs (e.g.", "negative labeled F1, for trees), we use the risk objective: R(θ) = N i=1 y p(y | x (i) ; θ)∆(y, y (i) ) which measures the model's expected cost over possible outputs y for each of the training examples (x (1) , y (1) ), .", ".", ".", ", (x (N ) , y (N ) ).", "Minimizing a risk objective has a long history in structured prediction (Povey and Woodland, 2002; Smith and Eisner, 2006; Li and Eisner, 2009; Gimpel and Smith, 2010) but often relies on the cost function decomposing according to the output structure.", "However, we can avoid any restrictions on the cost using reinforcement learning-style approaches (Xu et al., 2016; Shen et al., 2016; Edunov et al., 2017) where cost is ascribed to the entire output structure -albeit at the expense of introducing a potentially difficult credit assignment problem.", "The policy gradient method we apply is a simple variant of REINFORCE (Williams, 1992) .", "We perform mini-batch gradient descent on the gradient of the risk objective: ∇R(θ) = N i=1 y p(y|x (i) )∆(y, y (i) )∇ log p(y|x (i) ; θ) ≈ N i=1 y∈Y(x (i) ) ∆(y, y (i) )∇ log p(y|x (i) ; θ) where Y(x (i) ) is a set of k candidate trees obtained by sampling from the model's distribution for sentence x (i) .", "We use negative labeled F1 for ∆.", "To reduce the variance of the gradient estimates, we standardize ∆ using its running mean and standard deviation across all candidates used so far throughout training.", "Following Shen et al.", "(2016) , we also found better performance when including the gold tree y (i) in the set of k candidates Y(x (i) ), and do so for all experiments reported here.", "2 Dynamic Oracle Supervision For a given parser state s t , a dynamic oracle defines an action a * (s t ) which should be taken to incrementally produce the best tree still reachable from that state.", "3 Dynamic oracles provide 
strong supervision for training with exploration, but require custom design for a given transition system.", "Cross and Huang (2016) and defined optimal (with respect to F1) dynamic oracles for their respective transition systems, and below we define a novel dynamic oracle for the top-down system of RNNG.", "In RNNG, tree production occurs in a stackbased, top-down traversal which produces a leftto-right linearized representation of the tree using three actions: OPEN a labeled constituent (which fixes the constituent's span to begin at the next word in the sentence which has not been shifted), SHIFT the next word in the sentence to add it to the current constituent, or CLOSE the current constituent (which fixes its span to end after the last word that has been shifted).", "The parser stores opened constituents on the stack, and must therefore close them in the reverse of the order that they were opened.", "At a given parser state, our oracle does the following: 1.", "If there are any open constituents on the stack which can be closed (i.e.", "have had a word shifted since being opened), check the topmost of these (the one that has been opened most recently).", "If closing it would produce a constituent from the the gold tree that has not yet been produced (which is determined by the constituent's label, span beginning position, and the number of words currently shifted), or if the constituent could not be closed at a later position in the sentence to produce a constituent in the gold tree, return CLOSE.", "the estimate of the risk objective's gradient; however since in the parsing tasks we consider, the gold tree has constant and minimal cost, augmenting with the gold is equivalent to jointly optimizing the standard likelihood and risk objectives, using an adaptive scaling factor for each objective that is dependent on the cost for the trees that have been sampled from the model.", "We found that including the gold candidate in this manner outperformed initial experiments that first trained a model using likelihood training and then fine-tuned using unbiased policy gradient.", "3 More generally, an oracle can return a set of such actions that could be taken from the current state, but the oracles we use select a single canonical action.", "2.", "Otherwise, if there are constituents in the gold tree which have not yet been opened in the parser state, with span beginning at the next unshifted word, OPEN the outermost of these.", "3.", "Otherwise, SHIFT the next word.", "While we do not claim that this dynamic oracle is optimal with respect to F1, we find that it still helps substantially in supervising exploration (Section 5).", "Likelihood Training with Exploration Past work has differed on how to use dynamic oracles to guide exploration during oracle training Cross and Huang, 2016; .", "We use the same sample-based method of generating candidate sets Y as for policy gradient, which allows us to control the dynamic oracle and policy gradient methods to perform an equal amount of exploration.", "Likelihood training with exploration then maximizes the sum of the log probabilities for the oracle actions for all states composing the candidate trees: L E (θ) = N i=1 y∈Y(x (i) ) s∈y log p(a * (s) | s) where a * (s) is the dynamic oracle's action for state s. 
Softmax Margin Softmax margin loss (Gimpel and Smith, 2010; Auli and Lopez, 2011) addresses loss mismatch by incorporating task cost into the training loss.", "Since trees are decomposed into a sequence of local action predictions, we cannot use a global cost, such as F1, directly.", "As a proxy, we rely on the dynamic oracles' action-level supervision.", "In all models we consider, action probabilities (Eq.", "1) are parameterized by a softmax function p M L (a | s t ; θ) ∝ exp(z(a, s t , θ)) for some state-action scoring function z.", "The softmax-margin objective replaces this by p SM M (a | s t ; θ) ∝ exp(z(a, s t , θ) + ∆(a, a * t )) (2) We use ∆(a, a * t ) = 0 if a = a * t and 1 otherwise.", "This can be viewed as a \"soft\" version of the maxmargin objective used by for training without exploration, but retains a locallynormalized model that we can use for samplingbased exploration.", "Softmax Margin with Exploration Finally, we train using a combination of softmax margin loss augmentation and exploration.", "We perform the same sample-based candidate generation as for policy gradient and likelihood training with exploration, but use Eq.", "2 to compute the training loss for candidate states.", "For those parsers that have a dynamic oracle, this provides a means of training that more directly provides both exploration and cost-aware losses.", "Experiments We compare the constituency parsers listed in Section 2 using the above training methods.", "Our experiments use the English PTB (Marcus et al., 1993) , French Treebank (Abeillé et al., 2003) , and Penn Chinese Treebank (CTB) Version 5.1 (Xue et al., 2005) .", "Training To compare the training procedures as closely as possible, we train all models for a given parser in a given language from the same randomly-initialized parameter values.", "We train two different versions of the RNNG model: one model using size 128 for the LSTMs and hidden states (following the original work), and a larger model with size 256.", "We perform evaluation using greedy search in the Span-Based and Top-Down parsers, and beam search with beam size 10 for the RNNG and In-Order parsers.", "We found that beam search improved performance for these two parsers by around 0.1-0.3 F1 on the development sets, and use it at inference time in every setting for these two parsers.", "In our experiments, policy gradient typically requires more epochs of training to reach performance comparable to either of the dynamic oraclebased exploration methods.", "Figure 1 gives a typical learning curve, for the Top-Down parser on English.", "We found that policy gradient is also more sensitive to the number of candidates sampled per sentence than either of the other exploration methods, with best performance on the development set usually obtained with k = 10 for k ∈ {2, 5, 10} (where k also counts the sentence's gold tree, included in the candidate set).", "See Appendix A in the supplemental material for the values of k used.", "Tags, Embeddings, and Morphology We largely follow previous work for each parser in our use of predicted part-of-speech tags, pretrained word embeddings, and morphological features.", "All parsers use predicted part-of-speech tags as part of their sentence representations.", "For English and Chinese, we follow the setup of Cross and Huang (2016) : training the Stanford tagger (Toutanova et al., 2003) on the training set of each parsing corpus to predict development and test set tags, and using 10-way jackknifing to predict tags for the training set.", 
"For French, we use the predicted tags and morphological features provided with the SPMRL dataset (Seddah et al., 2014) .", "We modified the publicly released code for all parsers to use predicted morphological features for French.", "We follow the approach outlined by Cross and Huang (2016) and for representing morphological features as learned embeddings, and use the same dimensions for these embeddings as in their papers.", "For RNNG and In-Order, we similarly use 10-dimensional learned embeddings for each morphological feature, feeding them as LSTM inputs for each word alongside the word and part-of-speech tag embeddings.", "For RNNG and the In-Order parser, we use the same word embeddings as the original papers for English and Chinese, and train 100-dimensional word embeddings for French using the structured skip-gram method of Ling et al.", "(2015) on French Wikipedia.", "Table 1 compares parser F1 by training procedure for each language.", "Policy gradient improves upon likelihood training in 14 out of 15 cases, with improvements of up to 1.5 F1.", "One of the three dynamic oracle-based training methods -either likelihood with exploration, softmax margin (SMM), or softmax margin with exploration -obtains better performance than policy gradient in 10 out of 12 cases.", "This is perhaps unsurprising given the strong supervision provided by the dynamic oracles and the credit assignment problem faced by policy gradient.", "However, a substantial fraction of this performance gain is recaptured by policy gradient in most cases.", "Results and Discussion While likelihood training with exploration using a dynamic oracle more directly addresses exploration bias, and softmax margin training more directly addresses loss mismatch, these two phenomena are still entangled, and the best dynamic oracle-based method to use varies.", "The effectiveness of the oracle method is also likely to be influenced by the nature of the dynamic oracle available for the parser.", "For example, the oracle for RNNG lacks F1 optimality guarantees, and softmax margin without exploration often underperforms likelihood for this parser.", "However, exploration improves softmax margin training across all parsers and conditions.", "Although results from likelihood training are mostly comparable between RNNG-128 and the larger model RNNG-256 across languages, policy gradient and likelihood training with exploration both typically yield larger improvements in the larger models, obtaining 92.6 F1 for English and 86.0 for Chinese (using likelihood training with exploration), although results are slightly higher for the policy gradient and dynamic oracle-based methods for the smaller model on French (including 83.5 with softmax margin with exploration).", "Finally, we observe that policy gradient also provides large improvements for the In-Order parser, where a dynamic oracle has not been defined.", "We note that although some of these results (92.6 for English, 83.5 for French, 87.0 for Chinese) are state-of-the-art for single model, discriminative transition-based parsers, other work on constituency parsing achieves better performance through other methods.", "Techniques that combine multiple models or add semi-supervised data (Vinyals et al., 2015; Choe and Charniak, 2016; Kuncoro et al., 2017; Liu and Zhang, 2017; Fried et al., 2017) are orthogonal to, and could be combined with, the singlemodel, fixed training data methods we explore.", "Other recent work (Gaddy et al., 2018; Kitaev and Klein, 2018) obtains comparable or 
stronger performance with global chart decoders, where training uses loss augmentation provided by an oracle.", "By performing model-optimal global inference, these parsers likely avoid the exposure bias problem of the sequential transition-based parsers we investigate, at the cost of requiring a chart decoding procedure for inference.", "Overall, we find that although optimizing for F1 in a model-agnostic fashion with policy gradient typically underperforms the model-aware expert supervision given by the dynamic oracle training methods, it provides a simple method for consistently improving upon static oracle likelihood training, at the expense of increased training costs." ] }
{ "paper_header_number": [ "1", "2", "3", "3.1", "3.2", "4", "5" ], "paper_header_content": [ "Introduction", "Models", "Training Procedures", "Policy Gradient", "Dynamic Oracle Supervision", "Experiments", "Results and Discussion" ] }
GEM-SciDuet-train-80#paper-1206#slide-13
For Comparison: A Novel Oracle for RNNG
(S (NP The man (VP had 1. Close current constituent if it's a true constituent or it could never be a true constituent. (S (VP (NP The man 2. Otherwise, open the outermost unopened true constituent at this position. 3. Otherwise, shift the next word.
(S (NP The man (VP had 1. Close current constituent if it's a true constituent or it could never be a true constituent. (S (VP (NP The man 2. Otherwise, open the outermost unopened true constituent at this position. 3. Otherwise, shift the next word.
[]
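The three-step oracle summarized in the slide above (defined in Section 3.2 of the paper) can be rendered as a schematic decision procedure. All helper methods on `state` and `gold` below (`topmost_closable`, `has_unproduced`, `can_still_match`, `outermost_unopened_at`, `next_word_index`) are assumed, simplified interfaces introduced only for illustration, not the actual implementation:

```python
def rnng_oracle_action(state, gold):
    """Return the oracle action for `state`: "CLOSE", ("OPEN", label), or "SHIFT"."""
    # 1. CLOSE the most recently opened constituent that already contains a shifted
    #    word, if closing it now yields a gold constituent that has not been produced,
    #    or if it can no longer be closed later to match any gold constituent.
    top = state.topmost_closable()        # None if nothing on the stack can be closed
    if top is not None:
        closes_gold_now = gold.has_unproduced(top.label, top.start, state.words_shifted)
        if closes_gold_now or not gold.can_still_match(top, state):
            return "CLOSE"
    # 2. Otherwise OPEN the outermost unopened gold constituent whose span begins
    #    at the next unshifted word, if there is one.
    label = gold.outermost_unopened_at(state.next_word_index())
    if label is not None:
        return ("OPEN", label)
    # 3. Otherwise SHIFT the next word.
    return "SHIFT"
```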
GEM-SciDuet-train-81#paper-1211#slide-0
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(n d! l h^{d+1}), where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target-language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
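The equivalence-class signature σ(H_j) used by the bandwidth-limited TSPP dynamic program described above can be made concrete with a short sketch. It is an illustrative reconstruction, not code from the paper: the subgraph is assumed to be given as a list of directed edges over vertices 1..j, and an isolated vertex is treated as a trivial one-vertex path.

```python
from collections import defaultdict

def signature(j, k, edges):
    """Equivalence-class signature of a subgraph H_j over vertices 1..j.

    `edges` are the directed edges (u, v) of H_j; `k` is the bandwidth.
    Returns the degrees of the vertices in B_j together with the (start, end)
    pair of each connected component (directed path) of H_j.
    """
    b_j = set(range(max(1, j - k + 1), j + 1))   # B_j = {1..j} \ A_j
    out_edge, in_edge, degree = {}, {}, defaultdict(int)
    for u, v in edges:
        out_edge[u], in_edge[v] = v, u
        degree[u] += 1
        degree[v] += 1
    starts = [v for v in range(1, j + 1)
              if v in out_edge and v not in in_edge]        # heads of non-trivial paths
    isolated = [v for v in range(1, j + 1)
                if v not in out_edge and v not in in_edge]  # vertices with no edges yet
    paths = []
    for s in starts + isolated:
        end = s
        while end in out_edge:                              # follow the path to its end
            end = out_edge[end]
        paths.append((s, end))
    return {v: degree[v] for v in b_j}, sorted(paths)
```

Two subgraphs with equal signatures are interchangeable as far as the rest of the dynamic program is concerned, which is what keeps the number of states bounded in terms of the bandwidth rather than the number of vertices.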
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
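The scoring function in Eq. 1, together with the distortion-limit condition from Definition 2, can be sketched directly. This is an illustrative sketch: phrases are assumed to be (s, t, e) tuples with e a list of target words, and `phrase_score` (κ), `bigram_lm` (the conditional bigram score λ(e_i | e_{i-1})), and the penalty `eta` (η) are assumed inputs.

```python
def derivation_score(phrases, phrase_score, bigram_lm, eta, d):
    """Returns f(p_1 ... p_L), or None if the distortion limit d is violated."""
    words = [w for (_, _, e) in phrases for w in e]                  # e(p_1) . e(p_2) ... e(p_L)
    total = sum(bigram_lm(words[i], words[i - 1])                    # lambda(e_i | e_{i-1})
                for i in range(1, len(words)))
    total += sum(phrase_score(p) for p in phrases)                   # sum of kappa(p_i)
    for prev, cur in zip(phrases, phrases[1:]):
        jump = abs(prev[1] + 1 - cur[0])                             # |t(p_{i-1}) + 1 - s(p_i)|
        if jump > d:
            return None                                              # distortion limit violated
        total += eta * jump                                          # distortion penalty
    return total
```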
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
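Definition 3 can be sketched as a simple split of a derivation into maximal runs of phrases ending at or before j versus phrases starting after j; because phrases cover disjoint spans and some phrase ends exactly at j, the conditions t(p) ≤ j and s(p) > j are complementary. An illustrative sketch, assuming phrases are (s, t, e) tuples:

```python
from itertools import groupby

def split_derivation(phrases, j):
    """Returns (H_j, complement of H_j) as lists of maximal phrase sequences."""
    assert any(t == j for (_, t, _) in phrases), "some phrase must end at position j"
    runs = [(covered, list(run))
            for covered, run in groupby(phrases, key=lambda p: p[1] <= j)]
    h_j = [run for covered, run in runs if covered]        # the sequences pi_1 ... pi_r
    comp = [run for covered, run in runs if not covered]   # the complement sequences
    return h_j, comp
```

Concatenating the runs of the two lists in their original interleaved order recovers the full derivation, as required by the last condition of Definition 3.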
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
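To make the state representation and the check in Figure 5 concrete, here is a minimal Python sketch; it is our own illustration rather than code from the paper, and the names State, Signature and the choice d = 4 for the running example are assumptions on our part.

from dataclasses import dataclass
from typing import Tuple

# A signature sigma(pi) = (s, w_s, t, w_t): start position, start word,
# end position and end word of one disjoint phrase sequence.
Signature = Tuple[int, str, int, str]

@dataclass(frozen=True)
class State:
    j: int                             # all source words 1..j are covered
    signatures: Tuple[Signature, ...]  # one signature per phrase sequence

def valid(state: State, d: int) -> bool:
    """Check of Figure 5 / Lemma 2: every signature must start at position 1
    or at a position >= j - d + 2, and must end at a position >= j - d."""
    j = state.j
    for (s, _ws, t, _wt) in state.signatures:
        if s < j - d + 2 and s != 1:
            return False
        if t < j - d:
            return False
    return True

# The state reached after covering x_1 ... x_7 in the running example,
# assuming a distortion limit of d = 4 (which makes the example derivation legal).
T = State(7, ((1, "<s>", 4, "also"), (5, "these", 7, "seriously")))
assert valid(T, d=4)

Only positions in {1} ∪ {(j − d) . . . j} can appear as signature endpoints, which together with the bounds h and l defined above is what drives the state count in the result that follows.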
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
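As a quick sanity check on the counting argument just given, the short Python sketch below tabulates g(k) via the Appendix A recurrence and compares h(k) = k^2 * g(k) against (k − 2)!. It is our own illustration, not code from the paper; the base case g(0) = 1 (a single, empty p-structure on the empty set) is an assumption we add so that the second-order recurrence can be seeded together with g(1) = 2.

import math

def g(k: int) -> int:
    """Number of p-structures on a k-element set, via the recurrence
    g(k) = 2*g(k-1) + 2*(k-1)*g(k-2) derived in Appendix A."""
    if k == 0:
        return 1   # assumed base case: only the empty p-structure
    if k == 1:
        return 2   # the empty set and {(1, 1)}, as in Lemma 4
    return 2 * g(k - 1) + 2 * (k - 1) * g(k - 2)

# Lemma 5: h(k) = k^2 * g(k) is O((k - 2)!), which is where the d! factor
# in the O(n d! l h^(d+1)) bound of Theorem 1 comes from (with k = d + 2).
for k in range(2, 12):
    h = k * k * g(k)
    print(k, g(k), h, h / math.factorial(k - 2))

For k = 2 the recurrence gives g(2) = 6, which matches a direct enumeration of the six p-structures on a two-element set.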
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-0
Introduction
- Phrase-based decoding without further constraints is NP-hard - Proof: reduction from the travelling salesman problem - Hard distortion limit is commonly imposed in PBMT systems - Is phrase-based decoding with a fixed distortion limit NP-hard? A related problem: bandwidth-limited TSP This work: a new decoding algorithm - Process the source word from left-to-right - Maintain multiple tapes in the target side - Run time: O(nd!lh^(d+1)) n: source sentence length d: distortion limit
- Phrase-based decoding without further constraints is NP-hard - Proof: reduction from the travelling salesman problem - Hard distortion limit is commonly imposed in PBMT systems - Is phrase-based decoding with a fixed distortion limit NP-hard? A related problem: bandwidth-limited TSP This work: a new decoding algorithm - Process the source word from left-to-right - Maintain multiple tapes in the target side - Run time: O(nd!lh^(d+1)) n: source sentence length d: distortion limit
[]
GEM-SciDuet-train-81#paper-1211#slide-1
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^(d+1)) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
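As a concrete illustration of Definition 3, the small Python sketch below (our own illustration, not code from the paper) computes H_j for the derivation H of Figure 1 by collecting the maximal runs of consecutive phrases whose end position is at most j.

from typing import List, Tuple

Phrase = Tuple[int, int, str]   # (s(p), t(p), e(p)) as defined above

def sub_derivation(H: List[Phrase], j: int) -> List[List[Phrase]]:
    """H_j of Definition 3: the maximal runs of consecutive phrases in H
    whose end position t(p) is <= j. Only defined when some phrase of H
    ends exactly at position j."""
    assert any(t == j for (_s, t, _e) in H), "H_j is undefined for this j"
    runs: List[List[Phrase]] = []
    current: List[Phrase] = []
    for (s, t, e) in H:
        if t <= j:                  # t(p) <= j: the phrase belongs to H_j
            current.append((s, t, e))
        elif current:               # a phrase with s(p) > j breaks the run
            runs.append(current)
            current = []
    if current:
        runs.append(current)
    return runs

# The derivation H of Figure 1:
H = [(1, 1, "<s>"), (2, 3, "we must"), (4, 4, "also"), (8, 8, "take"),
     (5, 6, "these criticisms"), (7, 7, "seriously"), (9, 9, "</s>")]
# sub_derivation(H, 7) returns the two phrase sequences pi_1 and pi_2:
#   [(1, 1, '<s>'), (2, 3, 'we must'), (4, 4, 'also')]
#   [(5, 6, 'these criticisms'), (7, 7, 'seriously')]

Running it with j = 1, 3, 4, 6, 7, 8, 9 reproduces exactly the sub-derivations H_j listed in Figure 1.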
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
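To show how score(∆) in Figure 4 decomposes, here is a small Python sketch for the case ∆ = σ_i p φ (phrase p is appended to the sequence with signature σ_i). It is our own illustration, not code from the paper; kappa (the phrase translation score κ), lam (the bigram language model λ, called as lam(w, prev) for λ(w | prev)) and eta (the distortion penalty η) are assumed to be supplied by the caller.

from typing import Callable, Tuple

Phrase = Tuple[int, int, str]           # (s(p), t(p), e(p))
Signature = Tuple[int, str, int, str]   # (s, w_s, t, w_t)

def w_hat(p: Phrase,
          kappa: Callable[[Phrase], float],
          lam: Callable[[str, str], float]) -> float:
    """w-hat(p) = kappa(p) + sum_{i>=2} lam(e_i | e_{i-1}): the phrase score
    plus the language-model scores internal to the phrase (Figure 4)."""
    words = p[2].split()
    return kappa(p) + sum(lam(words[i], words[i - 1]) for i in range(1, len(words)))

def score_append(p: Phrase, sigma: Signature,
                 kappa: Callable[[Phrase], float],
                 lam: Callable[[str, str], float],
                 eta: float) -> float:
    """Score of the transition sigma_i p phi: w-hat(p), plus the bigram score
    of the first word of p given the last word of sigma_i, plus the
    distortion penalty eta * |t(sigma_i) + 1 - s(p)|."""
    s_p, _t_p, e_p = p
    first_word = e_p.split()[0]
    _s, _ws, t_sig, w_t = sigma
    return w_hat(p, kappa, lam) + lam(first_word, w_t) + eta * abs(t_sig + 1 - s_p)

The other cases of Figure 4 are analogous: the stand-alone case φ p φ scores just w-hat(p), while the prepend and join cases add the corresponding boundary bigram and distortion terms on the other side.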
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
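The counting argument above is easy to sanity-check numerically. The sketch below is our own illustration, not the authors' code: it evaluates g(k) using the recurrence derived in Appendix A and prints the ratio h(k)/(k − 2)! from Lemma 5. Note that Lemma 4 as printed states the recurrence with a factor (n − 1), while the Appendix A derivation gives (k − 1); the sketch follows the appendix.

```python
# Sanity check (not the authors' code) of the p-structure recurrence from
# Appendix A and the O((k-2)!) bound on h(k) = k^2 * g(k) from Lemma 5.
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def g(k):
    # Base cases g(0) = 0, g(1) = 2; recurrence g(k) = 2g(k-1) + 2(k-1)g(k-2),
    # following the Appendix A derivation (the "(n - 1)" printed in Lemma 4
    # appears to be a typo for "(k - 1)").
    if k == 0:
        return 0
    if k == 1:
        return 2
    return 2 * g(k - 1) + 2 * (k - 1) * g(k - 2)

for k in range(3, 16):
    ratio = (k * k * g(k)) / factorial(k - 2)   # h(k) / (k-2)!
    print(k, g(k), round(ratio, 1))
```

For the values checked here, the ratio peaks around k = 8 and then decreases, which is consistent with the choice k_0 = 9 made in the proof in Appendix B.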
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-1
Overview of the proposed decoding algorithm
das muss unsere sorge gleichermaßen sein I Process the source word from left-to-right I Maintain multiple tapes in the target side
das muss unsere sorge gleichermaßen sein I Process the source word from left-to-right I Maintain multiple tapes in the target side
[]
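The slide above summarizes the key idea of the algorithm: source words are consumed strictly from left to right, while the partially built target output is kept as several disconnected sequences (the "tapes"). The sketch below is our own illustration of the corresponding state representation, not the authors' code; the names Signature and State mirror the paper's notation σ = (s, w_s, t, w_t) and T = (j, {σ_1 ... σ_r}).

```python
# A minimal sketch (not from the paper's implementation) of the dynamic
# programming state: all source words 1..j are translated, and each open
# target-side sequence is summarized by its boundary positions and words.
from typing import NamedTuple

class Signature(NamedTuple):
    s: int      # source position where this target-side sequence starts
    w_s: str    # first target word of the sequence
    t: int      # source position where the sequence ends
    w_t: str    # last target word of the sequence

class State(NamedTuple):
    j: int              # source words 1..j have all been translated
    sigs: frozenset     # set of Signature tuples, one per open sequence ("tape")

# State reached after covering source positions 1..7 in the running example of
# Figure 3: two open sequences, "<s> we must also" and "these criticisms seriously".
T7 = State(7, frozenset({Signature(1, "<s>", 4, "also"),
                         Signature(5, "these", 7, "seriously")}))
print(T7)
```

Because only the boundary positions and boundary words of each sequence matter for the bigram language model and the distortion penalty, any two sub-derivations with the same set of signatures are interchangeable, which is exactly what Lemma 3 exploits.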
GEM-SciDuet-train-81#paper-1211#slide-2
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^{d+1}) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
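One piece of the algorithm that is easy to restate in code is the valid(T) check of Figure 5, which enforces the start- and end-point ranges of Lemma 2. In the extracted text the first condition reads "s(σ_i) = 1", but Lemma 2 implies it should be the negation, s(σ_i) ≠ 1 (a signature may start before position j − d + 2 only if it is the sequence anchored at position 1). The sketch below is our reading of that figure, not the authors' code.

```python
# A sketch (not from the paper's implementation) of valid(T) from Figure 5:
# every signature of a state T = (j, {sigma_1 ... sigma_r}) must start in
# {1} ∪ {j-d+2, ..., j} and end in {j-d, ..., j}, as required by Lemma 2.
# A signature is represented as a tuple (s, w_s, t, w_t).
def valid(j, signatures, d):
    for (s, _w_s, t, _w_t) in signatures:
        if s < j - d + 2 and s != 1:   # "=" in the extracted text read as "!=" per Lemma 2
            return False
        if t < j - d:
            return False
    return True

# Example: the state (7, {(1, "<s>", 4, "also"), (5, "these", 7, "seriously")})
# from Figure 3 passes the check for distortion limit d = 5.
print(valid(7, [(1, "<s>", 4, "also"), (5, "these", 7, "seriously")], 5))
```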
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-2
Phrase-based decoding problem
das muss unsere sorge gleichermaßen sein I Segment the German sentence into non-overlapping phrases this must our concern also be I Find an English translation for each German phrase this must also be our concern I Reorder the English phrases to get a better English sentence Derivation: complete translation with phrase mappings
das muss unsere sorge gleichermaßen sein I Segment the German sentence into non-overlapping phrases this must our concern also be I Find an English translation for each German phrase this must also be our concern I Reorder the English phrases to get a better English sentence Derivation: complete translation with phrase mappings
[]
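The slide above walks through the three ingredients of a derivation: segmenting the source into phrases, translating each phrase, and reordering the target phrases. The sketch below, our own illustration rather than the authors' code, spells out the corresponding score from Eq. 1 of the paper and the hard distortion constraint of Definition 2; kappa, bigram_lm and eta stand for the phrase translation score κ, the bigram language model λ, and the distortion penalty weight η.

```python
# A sketch of the derivation score f(p_1 ... p_L) from Eq. 1: bigram LM score
# of the concatenated output, plus phrase translation scores kappa(p), plus the
# distortion penalty eta * |t(p_{i-1}) + 1 - s(p_i)| between consecutive phrases.
def derivation_score(phrases, kappa, bigram_lm, eta):
    """phrases: list of (s, t, e) tuples, with e a list of target words."""
    words = [w for (_s, _t, e) in phrases for w in e]
    lm = sum(bigram_lm(prev, cur) for prev, cur in zip(words, words[1:]))
    trans = sum(kappa(p) for p in phrases)
    dist = sum(eta * abs(p1[1] + 1 - p2[0]) for p1, p2 in zip(phrases, phrases[1:]))
    return lm + trans + dist

def respects_distortion_limit(phrases, d):
    # The hard constraint of Definition 2: |t(p_{i-1}) + 1 - s(p_i)| <= d.
    return all(abs(p1[1] + 1 - p2[0]) <= d for p1, p2 in zip(phrases, phrases[1:]))
```

Applied to the example derivation of Figure 1, respects_distortion_limit returns True for any distortion limit d ≥ 4.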
GEM-SciDuet-train-81#paper-1211#slide-3
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^{d+1}) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-3
Score a derivation
das muss unsere sorge gleichermaßen sein score(<s> this must also be our concern </s>) I Phrase translation score: score(das muss, this must) + I Language model score:
das muss unsere sorge gleichermaßen sein score(<s> this must also be our concern </s>) I Phrase translation score: score(das muss, this must) + I Language model score:
[]
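The score referred to on this slide is Eq. 1 in the paper: a bigram language-model score over the concatenated target words, plus the phrase translation scores kappa(p), plus a distortion penalty eta * |t(p_{i-1}) + 1 - s(p_i)| between consecutive phrases. A minimal Python sketch follows (my own illustration; the phrase table kappa, the bigram scorer, and the value of eta below are placeholders, not real model parameters).

```python
# Minimal sketch (illustration only): score a derivation as in Eq. 1,
#   f(p_1 ... p_L) = LM(e(p_1) ... e(p_L)) + sum_i kappa(p_i)
#                    + sum_i eta * |t(p_{i-1}) + 1 - s(p_i)|,
# with a bigram language model. The score tables used below are placeholders.
from collections import defaultdict

def score_derivation(derivation, kappa, lm_bigram, eta):
    words = " ".join(e for _, _, e in derivation).split()
    lm = sum(lm_bigram(prev, w) for prev, w in zip(words, words[1:]))
    trans = sum(kappa[(s, t, e)] for s, t, e in derivation)
    dist = sum(eta * abs(t_prev + 1 - s_next)
               for (_, t_prev, _), (s_next, _, _) in zip(derivation, derivation[1:]))
    return lm + trans + dist

derivation = [(1, 1, "<s>"), (2, 3, "this must"), (6, 6, "also"), (7, 7, "be"),
              (4, 5, "our concern"), (8, 8, "</s>")]
kappa = defaultdict(float)                    # placeholder phrase scores
lm_bigram = lambda prev, w: -1.0              # placeholder bigram log-probability
print(score_derivation(derivation, kappa, lm_bigram, eta=-0.5))
```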
GEM-SciDuet-train-81#paper-1211#slide-4
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd! l h^(d+1)) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
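To make the Section 3 setup concrete, here is a minimal Python sketch that checks the bandwidth condition and solves a tiny traveling salesman path problem by brute force. The example graph and its weights are invented for illustration, and the brute-force enumeration is exponential in n; it is exactly the enumeration that the dynamic program described above avoids.

from itertools import permutations

def has_bandwidth(edges, k):
    # A graph is bandwidth-limited with bandwidth k if |i - j| <= k for every directed edge (i, j).
    return all(abs(i - j) <= k for (i, j) in edges)

def tspp_brute_force(n, weights):
    # Minimum-cost directed path from vertex 1 to vertex n visiting every vertex exactly once.
    # weights maps directed edges (i, j) to w_{i,j}; edges not in the dict are absent.
    best_cost, best_path = None, None
    for middle in permutations(range(2, n)):
        path = (1,) + middle + (n,)
        if all(e in weights for e in zip(path, path[1:])):
            cost = sum(weights[e] for e in zip(path, path[1:]))
            if best_cost is None or cost < best_cost:
                best_cost, best_path = cost, path
    return best_cost, best_path

# A small graph with bandwidth 2 (every edge satisfies |i - j| <= 2); weights are made up.
weights = {(1, 2): 1.0, (2, 1): 1.0, (1, 3): 2.0, (2, 3): 1.0, (3, 2): 1.0,
           (2, 4): 2.0, (3, 4): 1.0, (4, 3): 1.0, (3, 5): 2.0, (4, 5): 1.0}
assert has_bandwidth(weights, 2)
print(tspp_brute_force(5, weights))   # -> (4.0, (1, 2, 3, 4, 5))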
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
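The following is a minimal Python sketch of the Section 4.1 objects: phrases p = (s, t, e), the three conditions of Definition 2, and the scoring function of Eq. 1 under a bigram language model. The score tables kappa and lm and the penalty eta are placeholders that a real system would supply, and the distortion limit d = 4 used for the worked derivation (the one shown in Figure 1 below) is assumed for illustration.

from collections import namedtuple

Phrase = namedtuple("Phrase", ["s", "t", "e"])   # e is a tuple of target-language words

def is_valid_derivation(phrases, n, d):
    # The three conditions of Definition 2.
    if phrases[0] != Phrase(1, 1, ("<s>",)) or phrases[-1] != Phrase(n, n, ("</s>",)):
        return False
    covered = sorted(x for p in phrases for x in range(p.s, p.t + 1))
    if covered != list(range(1, n + 1)):                      # each source word translated exactly once
        return False
    return all(abs(prev.t + 1 - cur.s) <= d                   # distortion limit
               for prev, cur in zip(phrases, phrases[1:]))

def score_derivation(phrases, kappa, lm, eta):
    # f(p_1 ... p_L) of Eq. 1: bigram LM score + phrase scores + distortion penalties.
    words = [w for p in phrases for w in p.e]
    lm_score = sum(lm.get((u, v), -10.0) for u, v in zip(words, words[1:]))  # -10.0: arbitrary backoff
    phrase_score = sum(kappa[p] for p in phrases)
    distortion = sum(eta * abs(prev.t + 1 - cur.s) for prev, cur in zip(phrases, phrases[1:]))
    return lm_score + phrase_score + distortion

# The worked derivation of Figure 1 (n = 9); it satisfies a distortion limit of d = 4.
H = [Phrase(1, 1, ("<s>",)), Phrase(2, 3, ("we", "must")), Phrase(4, 4, ("also",)),
     Phrase(8, 8, ("take",)), Phrase(5, 6, ("these", "criticisms")),
     Phrase(7, 7, ("seriously",)), Phrase(9, 9, ("</s>",))]
assert is_valid_derivation(H, n=9, d=4)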
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
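The split of Definition 3 can be computed directly: group the maximal runs of consecutive phrases whose spans end at or before j (the sub-derivation H_j) and the maximal runs of phrases starting after j (the complement). A minimal sketch follows, restating the Phrase tuple from the earlier sketch so it runs on its own and checking the result against the H_7 example of Figure 1.

from itertools import groupby
from collections import namedtuple

Phrase = namedtuple("Phrase", ["s", "t", "e"])

def split_at(H, j):
    # Sub-derivation H_j and its complement (Definition 3): H_j collects the maximal runs of
    # consecutive phrases with t(p) <= j, the complement the maximal runs with s(p) > j.
    # (No phrase can straddle j: some phrase ends exactly at j and each word is covered once.)
    if not any(p.t == j for p in H):
        raise ValueError("H_j is only defined when some phrase of H ends exactly at j")
    runs = [(left, list(run)) for left, run in groupby(H, key=lambda p: p.t <= j)]
    return ([run for left, run in runs if left],       # pi_1 ... pi_r
            [run for left, run in runs if not left])   # complement pi-bar_1 ... pi-bar_r

H = [Phrase(1, 1, ("<s>",)), Phrase(2, 3, ("we", "must")), Phrase(4, 4, ("also",)),
     Phrase(8, 8, ("take",)), Phrase(5, 6, ("these", "criticisms")),
     Phrase(7, 7, ("seriously",)), Phrase(9, 9, ("</s>",))]
H7, H7_bar = split_at(H, 7)
assert [[(p.s, p.t) for p in pi] for pi in H7] == [[(1, 1), (2, 3), (4, 4)], [(5, 6), (7, 7)]]
assert [[(p.s, p.t) for p in pi] for pi in H7_bar] == [[(8, 8)], [(9, 9)]]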
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
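The state machinery of Section 4.2 can be sketched as follows. A Signature records the start and end positions and boundary words of a phrase sequence, a state is (j, {sigma_1 ... sigma_r}), valid() follows Figure 5 (reading its first condition as s(sigma_i) != 1, which is what Lemma 2 requires; the comparison sign is garbled in the extracted text), and join() implements transition Case 4. The final assertion reproduces the Figure 3 transition sigma_1 (8, 8, take) sigma_2, with a distortion limit of d = 4 assumed for the example.

from collections import namedtuple

Phrase = namedtuple("Phrase", ["s", "t", "e"])            # e: tuple of target-language words
Signature = namedtuple("Signature", ["s", "ws", "t", "wt"])

def sigma(pi):
    # Signature of a phrase sequence: start position/word and end position/word.
    return Signature(pi[0].s, pi[0].e[0], pi[-1].t, pi[-1].e[-1])

def valid(state, d):
    # Figure 5: every signature must start at 1 or at position >= j - d + 2, and end at position >= j - d.
    j, sigs = state
    for sg in sigs:
        if sg.s < j - d + 2 and sg.s != 1:
            return False
        if sg.t < j - d:
            return False
    return True

def join(state, sig_a, p, sig_b, d):
    # Transition Case 4 (sigma_i p sigma_i'): p is appended to sig_a and prepended to sig_b,
    # merging the two signatures into one; p must start at position j + 1.
    j, sigs = state
    assert p.s == j + 1
    assert abs(sig_a.t + 1 - p.s) <= d and abs(p.t + 1 - sig_b.s) <= d
    merged = Signature(sig_a.s, sig_a.ws, sig_b.t, sig_b.wt)
    return (p.t, (sigs - {sig_a, sig_b}) | {merged})

# Signatures of the two phrase sequences in H_7, then the Figure 3 transition with (8, 8, take).
pi1 = [Phrase(1, 1, ("<s>",)), Phrase(2, 3, ("we", "must")), Phrase(4, 4, ("also",))]
pi2 = [Phrase(5, 6, ("these", "criticisms")), Phrase(7, 7, ("seriously",))]
s1, s2 = sigma(pi1), sigma(pi2)
assert s1 == Signature(1, "<s>", 4, "also") and s2 == Signature(5, "these", 7, "seriously")
T7 = (7, frozenset({s1, s2}))
assert valid(T7, d=4)
T8 = join(T7, s1, Phrase(8, 8, ("take",)), s2, d=4)
assert T8 == (8, frozenset({Signature(1, "<s>", 7, "seriously")}))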
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
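The counting behind Theorem 1 is easy to evaluate. The sketch below implements g(k) using the Appendix A recurrence for k >= 3 (g(1) = 2 is given there; g(2) = 6 is obtained by listing the six p-structures of a two-element set, which sidesteps the g(0) base case), and computes the Section 4.5 bounds of n * g(d+2) * h^(d+1) states and (d+2)^2 * l transitions per state. The values of d, h and l below are invented purely to illustrate that, for a fixed distortion limit, the bound grows linearly in the sentence length n.

from functools import lru_cache

@lru_cache(maxsize=None)
def g(k):
    # Number of p-structures (Definition 4) on a k-element set.
    if k == 1:
        return 2    # {} and {(1, 1)}
    if k == 2:
        return 6    # {}, {(1,1)}, {(2,2)}, {(1,2)}, {(2,1)}, {(1,1), (2,2)}
    return 2 * g(k - 1) + 2 * (k - 1) * g(k - 2)   # recurrence of Appendix A

def state_bound(n, d, h):
    # At most n * g(d + 2) * h^(d + 1) dynamic programming states (Section 4.5).
    return n * g(d + 2) * h ** (d + 1)

def transitions_per_state(d, l):
    # At most (d + 2)^2 * l outgoing transitions from any state (Section 4.5).
    return (d + 2) ** 2 * l

# Illustrative values only: distortion limit 4, at most 5 boundary-word choices per
# position (h), at most 10 phrases starting at any position (l).
for n in (10, 20, 40, 80):
    print(n, state_bound(n, d=4, h=5), transitions_per_state(d=4, l=10))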
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-4
Fixed distortion limit distortion distance
das muss unsere sorge gleichermaßen sein this must also be our concern
das muss unsere sorge gleichermaßen sein this must also be our concern
[]
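A worked distortion-distance computation for the example sentence on this slide. The phrase segmentation and word alignment below are assumed for illustration (the slide only shows the sentence pair); source positions are das=1, muss=2, unsere=3, sorge=4, gleichermaßen=5, sein=6, and the <s>/</s> markers are omitted for brevity.

# Phrases (s, t, target string) in the order they are emitted on the target side.
phrases = [
    (1, 1, "this"),
    (2, 2, "must"),
    (5, 5, "also"),
    (6, 6, "be"),
    (3, 4, "our concern"),
]

def distortions(phrases):
    # |t(p_{i-1}) + 1 - s(p_i)| for each consecutive pair of phrases.
    return [abs(t_prev + 1 - s_cur)
            for (_, t_prev, _), (s_cur, _, _) in zip(phrases, phrases[1:])]

print(distortions(phrases))        # [0, 2, 0, 4]
print(max(distortions(phrases)))   # 4: any hard distortion limit d >= 4 permits this ordering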
GEM-SciDuet-train-81#paper-1211#slide-5
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^{d+1}) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
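The state and transition definitions in the paper content above (the signatures of Section 4.2 and the four transition cases of Section 4.2.1) translate fairly directly into code. The following is a minimal illustrative sketch, not code from the paper or the dataset: the names Phrase, Signature, allowed_pairs and apply_transition are assumptions made here, and scoring, the valid(T) pruning of Lemma 2, and beam search are all omitted.

from collections import namedtuple

Phrase = namedtuple("Phrase", "s t e")             # source span [s, t] and tuple of target words e
Signature = namedtuple("Signature", "s ws t wt")   # (start pos, start word, end pos, end word)

def sig_of_phrase(p):
    # Signature of a single-phrase sequence: first/last target word of the phrase.
    return Signature(p.s, p.e[0], p.t, p.e[-1])

def allowed_pairs(state, phrase, d, n):
    # Enumerate the (psi1, psi2) choices permitted for `phrase` from `state`,
    # where `state` = (j, frozenset of signatures) and phrase.s must equal j + 1.
    j, sigs = state
    assert phrase.s == j + 1
    left = [None] + [s1 for s1 in sigs if abs(s1.t + 1 - phrase.s) <= d and s1.t != n]
    right = [None] + [s2 for s2 in sigs if abs(phrase.t + 1 - s2.s) <= d and s2.s != 1]
    for psi1 in left:
        for psi2 in right:
            if psi1 is not None and psi1 is psi2:
                continue
            yield psi1, psi2

def apply_transition(state, psi1, phrase, psi2):
    # Successor state for the transition psi1 . phrase . psi2 (Cases 1-4 of Section 4.2.1).
    j, sigs = state
    sigs = set(sigs)
    if psi1 is None and psi2 is None:            # Case 1: stand-alone phrase
        sigs.add(sig_of_phrase(phrase))
    elif psi1 is not None and psi2 is None:      # Case 2: append phrase to psi1
        sigs.remove(psi1)
        sigs.add(Signature(psi1.s, psi1.ws, phrase.t, phrase.e[-1]))
    elif psi1 is None and psi2 is not None:      # Case 3: prepend phrase to psi2
        sigs.remove(psi2)
        sigs.add(Signature(phrase.s, phrase.e[0], psi2.t, psi2.wt))
    else:                                        # Case 4: phrase joins psi1 and psi2
        sigs.remove(psi1)
        sigs.remove(psi2)
        sigs.add(Signature(psi1.s, psi1.ws, psi2.t, psi2.wt))
    return (phrase.t, frozenset(sigs))

# The Figure 3 step: from the state at j = 7, the phrase (8, 8, "take") joins the
# two signatures (Case 4) into the single signature (1, <s>, 7, seriously).
sig1 = Signature(1, "<s>", 4, "also")
sig2 = Signature(5, "these", 7, "seriously")
state7 = (7, frozenset({sig1, sig2}))
take = Phrase(8, 8, ("take",))
assert (sig1, sig2) in set(allowed_pairs(state7, take, d=4, n=9))
print(apply_transition(state7, sig1, take, sig2))
# -> (8, frozenset({Signature(s=1, ws='<s>', t=7, wt='seriously')}))

Keeping each state as (j, frozenset of signatures) makes states hashable, so they can index a chart or beam table directly, which is one way to realize the dynamic program of Figure 2.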
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-5
Target side left to right: the usual decoding algorithm
das muss unsere sorge gleichermaßen sein unsere sorge das muss gleichermaßen sein this must also be our concern
das muss unsere sorge gleichermaßen sein unsere sorge das muss gleichermaßen sein this must also be our concern
[]
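The slide above shows the source-side versus target-side ordering that a derivation has to bridge within the distortion limit. As a small, self-contained illustration of the derivation constraints in Definition 2 of the paper content (start with <s>, end with </s>, cover every source word exactly once, and keep |t(p_{i-1}) + 1 - s(p_i)| <= d between consecutive phrases), here is a hedged sketch; the helper name is_valid_derivation and the distortion-limit values are assumptions made for this example, not part of the paper or the dataset.

def is_valid_derivation(phrases, n, d):
    # phrases: list of (s, t, e) tuples over a source sentence of length n.
    if phrases[0][:2] != (1, 1) or phrases[-1][:2] != (n, n):
        return False                              # must start at <s> and end at </s>
    covered = sorted(pos for (s, t, _) in phrases for pos in range(s, t + 1))
    if covered != list(range(1, n + 1)):
        return False                              # a source word is missed or repeated
    for (_, t_prev, _), (s_next, _, _) in zip(phrases, phrases[1:]):
        if abs(t_prev + 1 - s_next) > d:
            return False                          # distortion limit violated
    return True

# The running derivation from Figure 1 of the paper (n = 9). Its largest jumps are
# |4 + 1 - 8| = 3 and |8 + 1 - 5| = 4, so it is valid for d >= 4 but not for d = 3.
H = [(1, 1, "<s>"), (2, 3, "we must"), (4, 4, "also"), (8, 8, "take"),
     (5, 6, "these criticisms"), (7, 7, "seriously"), (9, 9, "</s>")]
print(is_valid_derivation(H, n=9, d=4))   # True
print(is_valid_derivation(H, n=9, d=3))   # False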
GEM-SciDuet-train-81#paper-1211#slide-6
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(n d! l h^{d+1}), where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
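For a rough sense of the O(n d! l h^{d+1}) bound stated in this abstract, the short sketch below simply evaluates the formula for made-up values of n, l and h; the numbers are illustrative only and say nothing about constants or practical speed with beam search.

from math import factorial

def runtime_bound(n, d, l, h):
    # The bound stated in the abstract: n * d! * l * h^(d+1).
    return n * factorial(d) * l * h ** (d + 1)

for d in (2, 4, 6):
    print(d, runtime_bound(n=30, d=d, l=10, h=5))
# d=2: 75,000    d=4: 22,500,000    d=6: 16,875,000,000
# The d! and h^(d+1) factors dominate, so the cost is very sensitive to the distortion limit.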
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
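To make the state-validity check concrete, here is a minimal Python sketch of the valid(T) function of Figure 5, assuming a simple namedtuple representation of signatures; the names Signature and is_valid_state, and the choice d = 4 for the running example, are ours rather than the paper's.

```python
from collections import namedtuple

# A signature sigma(pi) = (s, w_s, t, w_t): start position/word and
# end position/word of a phrase sequence.
Signature = namedtuple("Signature", ["s", "ws", "t", "wt"])

def is_valid_state(j, signatures, d):
    """valid(T) from Figure 5 for a state T = (j, {sigma_1 ... sigma_r}):
    every signature must start at position 1 or at a position in
    {j-d+2, ..., j}, and must end at a position in {j-d, ..., j}."""
    for sig in signatures:
        if sig.s < j - d + 2 and sig.s != 1:
            return False
        if sig.t < j - d:
            return False
    return True

# The state reached after covering source positions 1..7 in Figure 3.
sigmas = [Signature(1, "<s>", 4, "also"), Signature(5, "these", 7, "seriously")]
print(is_valid_state(7, sigmas, d=4))  # True
print(is_valid_state(8, sigmas, d=2))  # False: the first signature ends at 4 < 8 - 2
```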
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
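As a sanity check on the counting argument above, the following sketch enumerates p-structures (Definition 4) by brute force and compares the counts with the Appendix A recurrence g(k) = 2g(k-1) + 2(k-1)g(k-2). The base cases g(0) = 1 (only the empty p-structure) and g(1) = 2 are our choice, made so that the recurrence reproduces the brute-force counts.

```python
from itertools import product

def count_p_structures(k):
    """Brute-force count of p-structures over A = {1, ..., k} (Definition 4):
    sets of ordered pairs over A x A in which each element of A occurs as a
    start and/or end point of at most one pair."""
    elems = range(1, k + 1)
    pairs = [(s, t) for s in elems for t in elems]
    count = 0
    for mask in product([0, 1], repeat=len(pairs)):       # all subsets of A x A
        chosen = [p for p, bit in zip(pairs, mask) if bit]
        touched = []
        ok = True
        for s, t in chosen:
            if any({s, t} & other for other in touched):  # element already used
                ok = False
                break
            touched.append({s, t})
        if ok:
            count += 1
    return count

def g(k):
    """Recurrence from Appendix A, with assumed base cases g(0) = 1, g(1) = 2."""
    vals = [1, 2]
    for i in range(2, k + 1):
        vals.append(2 * vals[i - 1] + 2 * (i - 1) * vals[i - 2])
    return vals[k]

for k in range(5):  # k up to 4 keeps the brute force small (at most 2^16 subsets)
    print(k, count_p_structures(k), g(k))  # the two counts agree: 1, 2, 6, 20, 76
```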
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-6
Target-side left-to-right dynamic programming algorithm
das muss unsere sorge gleichermaßen sein unsere sorge das muss gleichermaßen sein this must also be our concern
das muss unsere sorge gleichermaßen sein unsere sorge das muss gleichermaßen sein this must also be our concern
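The slide's German example can be used to illustrate the distortion computation that the decoder must bound. The sketch below assumes one plausible phrase-by-phrase derivation of "this must also be our concern" from "das muss unsere sorge gleichermaßen sein" (the segmentation is our guess, purely for illustration, and sentence-boundary markers are omitted) and checks the distortion condition |t(p_{i-1}) + 1 - s(p_i)| <= d used in the paper.

```python
# Source positions: das=1 muss=2 unsere=3 sorge=4 gleichermaßen=5 sein=6.
# A hypothetical derivation in target order, written as (s, t, english) phrases.
derivation = [
    (1, 1, "this"),
    (2, 2, "must"),
    (5, 5, "also"),
    (6, 6, "be"),
    (3, 4, "our concern"),
]

def jumps(derivation):
    """Distortion |t(p_{i-1}) + 1 - s(p_i)| for each consecutive phrase pair."""
    return [abs(derivation[i - 1][1] + 1 - derivation[i][0])
            for i in range(1, len(derivation))]

def respects_distortion_limit(derivation, d):
    return all(j <= d for j in jumps(derivation))

print(jumps(derivation))                           # [0, 2, 0, 4]
print(respects_distortion_limit(derivation, d=4))  # True
print(respects_distortion_limit(derivation, d=3))  # False: the jump back to "unsere sorge" is 4
```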
[]
GEM-SciDuet-train-81#paper-1211#slide-7
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^{d+1}) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
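As a concrete reading of Eq. 1, here is a minimal Python sketch of the derivation score: a bigram language-model term over the concatenated target words, the phrase scores kappa(p), and the distortion penalty. The data representation, function names, and the toy scores in the usage example are ours, not the paper's; the derivation used is the running example of Figure 1.

```python
import math

def score_derivation(derivation, kappa, bigram_logprob, eta):
    """f(p_1 ... p_L) from Eq. 1.  Each phrase is (s, t, e) with e a tuple of
    target words; kappa maps a phrase to its translation score; eta is the
    (typically negative) distortion penalty."""
    words = [w for (_, _, e) in derivation for w in e]
    lm = sum(bigram_logprob(words[i - 1], words[i]) for i in range(1, len(words)))
    trans = sum(kappa[p] for p in derivation)
    distortion = sum(eta * abs(derivation[i - 1][1] + 1 - derivation[i][0])
                     for i in range(1, len(derivation)))
    return lm + trans + distortion

# Toy usage with the derivation H of Figure 1 and dummy scores.
H = [(1, 1, ("<s>",)), (2, 3, ("we", "must")), (4, 4, ("also",)),
     (8, 8, ("take",)), (5, 6, ("these", "criticisms")),
     (7, 7, ("seriously",)), (9, 9, ("</s>",))]
kappa = {p: 0.0 for p in H}               # dummy phrase translation scores
uniform = lambda prev, w: math.log(0.1)   # stand-in bigram log-probability
print(score_derivation(H, kappa, uniform, eta=-0.5))
```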
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
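The sub-derivation construction can be made concrete with a few lines of Python. The sketch below groups the phrases of the Figure 1 derivation into the maximal consecutive sub-sequences whose spans end at or before j, and also computes the signature sigma(pi) of each sequence; the representation and helper names are ours.

```python
# The derivation H of Figure 1, in order, as (s, t, target-string) phrases.
H = [(1, 1, "<s>"), (2, 3, "we must"), (4, 4, "also"), (8, 8, "take"),
     (5, 6, "these criticisms"), (7, 7, "seriously"), (9, 9, "</s>")]

def sub_derivation(H, j):
    """H_j from Definition 3: maximal sub-sequences of consecutive phrases
    of H whose source spans end at or before position j."""
    groups, current = [], []
    for phrase in H:
        if phrase[1] <= j:
            current.append(phrase)
        elif current:            # a phrase ending after j breaks the sequence
            groups.append(current)
            current = []
    if current:
        groups.append(current)
    return groups

def signature(pi):
    """sigma(pi) = (start position, first target word, end position, last target word)."""
    first, last = pi[0], pi[-1]
    return (first[0], first[2].split()[0], last[1], last[2].split()[-1])

H7 = sub_derivation(H, 7)
print(H7)                            # the two phrase sequences of H_7 in Figure 1
print([signature(pi) for pi in H7])  # [(1, '<s>', 4, 'also'), (5, 'these', 7, 'seriously')]
```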
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
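As a quick numerical illustration of the appendix above, the following sketch tabulates the recurrence g(k) = 2g(k-1) + 2(k-1)g(k-2) with the base cases as stated in Lemma 4, and prints h(k) = k^2 * g(k) next to (k-2)! so that the O((k-2)!) bound of Lemma 5 can be eyeballed. This is only an evaluation of the stated recurrence (written in Python, with the base cases g(0) = 0 and g(1) = 2 taken from the lemma as written), not a verified enumeration of p-structures.

from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def g(k):
    # Recurrence from Lemma 4; base cases as stated there.
    if k == 0:
        return 0
    if k == 1:
        return 2
    return 2 * g(k - 1) + 2 * (k - 1) * g(k - 2)

def h(k):
    # h(k) = k^2 * g(k), shown in Lemma 5 to be O((k - 2)!).
    return k * k * g(k)

for k in range(3, 13):
    print(k, g(k), h(k), factorial(k - 2))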
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-7
Source side left to right the proposed algorithm
das muss unsere sorge gleichermaßen sein this must our concern this must also our concern this must also be our concern
das muss unsere sorge gleichermaßen sein this must our concern this must also our concern this must also be our concern
[]
GEM-SciDuet-train-81#paper-1211#slide-8
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(n · d! · l · h^{d+1}), where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target-language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
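To make the definitions in this section concrete, here is a minimal sketch (in Python, with illustrative names) of a phrase p = (s, t, e), the distortion-limit condition of Definition 2, and the scoring function of Eq. 1 under a bigram language model. The way the phrase scores kappa, the bigram scores, and the distortion penalty eta are passed in is an assumption made for the sketch, not the paper's implementation.

from typing import List, NamedTuple, Tuple

class Phrase(NamedTuple):
    s: int               # start position in the source sentence
    t: int               # end position in the source sentence
    e: Tuple[str, ...]   # target-language words e_1 ... e_m

def satisfies_distortion_limit(derivation: List[Phrase], d: int) -> bool:
    # Definition 2: |t(p_{i-1}) + 1 - s(p_i)| <= d for all consecutive phrase pairs.
    return all(abs(derivation[i - 1].t + 1 - derivation[i].s) <= d
               for i in range(1, len(derivation)))

def f(derivation: List[Phrase], kappa, bigram, eta: float) -> float:
    # Eq. 1: bigram language model score over the concatenated target string,
    # plus the phrase translation scores kappa(p), plus the distortion penalty.
    # bigram(w, w_prev) is assumed to return lambda(w | w_prev).
    words = [w for p in derivation for w in p.e]
    lm = sum(bigram(words[i], words[i - 1]) for i in range(1, len(words)))
    trans = sum(kappa(p) for p in derivation)
    dist = sum(eta * abs(derivation[i - 1].t + 1 - derivation[i].s)
               for i in range(1, len(derivation)))
    return lm + trans + dist

A derivation would then be a list of Phrase objects that starts with (1, 1, ('<s>',)), ends with (n, n, ('</s>',)), and covers every source word exactly once.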
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
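To mirror this split in code, the small sketch below groups the phrases of a derivation, at a position j where some phrase ends, into the maximal runs with t(p) <= j and the complementary runs with s(p) > j, in the spirit of Definition 3; the function name and the plain-tuple phrase representation are illustrative choices, not the paper's code.

def split_at(derivation, j):
    # derivation: list of (s, t, e) phrases in derivation order; j: a position
    # at which some phrase ends.  Returns (H_j, H_bar_j): the maximal runs of
    # consecutive phrases with t <= j, and the complementary runs with s > j.
    covered_runs, uncovered_runs = [], []
    run, run_is_covered = [], None
    for (s, t, e) in derivation:
        is_covered = (t <= j)
        if run and is_covered != run_is_covered:
            (covered_runs if run_is_covered else uncovered_runs).append(run)
            run = []
        run.append((s, t, e))
        run_is_covered = is_covered
    if run:
        (covered_runs if run_is_covered else uncovered_runs).append(run)
    return covered_runs, uncovered_runs

On the derivation of Figure 1 with j = 7, this returns the two phrase sequences pi_1 and pi_2 shown there, together with a complement containing (8, 8, take) and (9, 9, </s>).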
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
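The bounds h and l just defined are straightforward to compute from a phrase lexicon; the sketch below does so, reading first(j) and last(j) as the sets of first and last target-language words of phrases that start and end at position j respectively. That reading, and the function name, are assumptions of the sketch rather than something spelled out in the paper.

def ambiguity_bounds(phrases, n):
    # phrases: lexicon entries (s, t, e) for a source sentence of length n,
    # with e a tuple of target-language words.  Returns (h, l) as defined above.
    first = {j: set() for j in range(1, n + 1)}    # first words of phrases starting at j
    last = {j: set() for j in range(1, n + 1)}     # last words of phrases ending at j
    singles = {j: set() for j in range(1, n + 1)}  # single-word phrases covering j
    start = {j: set() for j in range(1, n + 1)}    # phrases starting at j
    for (s, t, e) in phrases:
        first[s].add(e[0])
        last[t].add(e[-1])
        start[s].add((s, t, e))
        if s == t:
            singles[s].add((s, t, e))
    h = max(max(len(first[j]), len(last[j]), len(singles[j])) for j in range(1, n + 1))
    l = max(len(start[j]) for j in range(1, n + 1))
    return h, l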
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
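Returning to the beam search variant of Section 5.1 above, the following sketch shows one way the proposed ranking score gamma(T) = alpha(T) + beta(T) could be plugged into the dynamic program, with beta(T) summing unigram scores of the words starting each signature other than <s>. The state layout, the unigram model, and the beam size are illustrative assumptions; the paper proposes the scoring criterion but not a particular implementation.

def gamma(state, alpha, unigram_logprob):
    # state is (j, sigs) with sigs a tuple of signatures (s, w_s, t, w_t);
    # alpha maps states to their dynamic-programming scores alpha(T).
    j, sigs = state
    beta = sum(unigram_logprob(ws) for (s, ws, t, wt) in sigs if ws != "<s>")
    return alpha[state] + beta

def beam(states_j, alpha, unigram_logprob, beam_size=100):
    # Keep only the highest-scoring states of T_j under gamma(T) = alpha(T) + beta(T).
    ranked = sorted(states_j, key=lambda T: gamma(T, alpha, unigram_logprob), reverse=True)
    return ranked[:beam_size]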
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-8
Source side left to right dynamic programming state
das muss unsere sorge gleichermaßen sein this must our concern this must also our concern this must also be our concern
das muss unsere sorge gleichermaßen sein this must our concern this must also our concern this must also be our concern
[]
GEM-SciDuet-train-81#paper-1211#slide-9
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(n · d! · l · h^{d+1}), where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target-language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
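As a brief aside, the equivalence-class machinery of Section 3 just described is compact enough to prototype directly. The sketch below is illustrative code of my own (not from the paper; the function and variable names are assumptions). It computes the data structure σ(H_j) for a subgraph given as a set of directed edges over vertices 1 . . . j, namely the degree of every vertex in B_j together with the start and end point of each path component. Two subgraphs are interchangeable in the dynamic program exactly when these structures coincide.

```python
from collections import defaultdict

def sigma(j, k, edges):
    """Equivalence class of a subgraph H_j under bandwidth k.

    edges: iterable of directed edges (u, v) with 1 <= u, v <= j.
    Returns (degrees of the vertices in B_j, endpoints of each path component),
    both as sorted tuples, so that subgraphs in the same class compare equal.
    """
    b_j = range(max(1, j - k + 1), j + 1)          # B_j = {1 ... j} minus A_j
    in_deg, out_deg = defaultdict(int), defaultdict(int)
    parent = {v: v for v in range(1, j + 1)}       # union-find over components

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
        parent[find(u)] = find(v)

    degrees = tuple(sorted((v, in_deg[v] + out_deg[v]) for v in b_j))

    components = defaultdict(list)
    for v in range(1, j + 1):
        components[find(v)].append(v)
    endpoints = []
    for vertices in components.values():
        start = next(v for v in vertices if in_deg[v] == 0)   # unique for a directed path
        end = next(v for v in vertices if out_deg[v] == 0)
        endpoints.append((start, end))
    return degrees, tuple(sorted(endpoints))

# Example: j = 4, bandwidth k = 2, subgraph containing the single edge 1 -> 2.
print(sigma(4, 2, [(1, 2)]))
# (((3, 0), (4, 0)), ((1, 2), (3, 3), (4, 4)))
```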
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
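The objective in Eq. 1 translates almost line by line into code. The following is a minimal sketch, not the paper's implementation: it assumes the phrase translation scores κ, the bigram language model λ, and the distortion penalty η are supplied by the caller (the toy values at the bottom are invented for illustration), and it uses the derivation that appears in Figure 1 below as input.

```python
def derivation_score(phrases, kappa, bigram_lm, eta):
    """f(p_1 ... p_L) from Eq. 1, for a derivation given as a list of phrases
    p = (s, t, e) with e a tuple of target-language words.

    kappa:     dict mapping each phrase to its translation score kappa(p)
    bigram_lm: function lm(w, prev) returning lambda(w | prev)
    eta:       distortion penalty (typically a negative constant)
    """
    target = [w for (_, _, e) in phrases for w in e]          # e(p_1) . e(p_2) ... e(p_L)
    lm = sum(bigram_lm(target[i], target[i - 1]) for i in range(1, len(target)))
    translation = sum(kappa[p] for p in phrases)
    distortion = sum(eta * abs(phrases[i - 1][1] + 1 - phrases[i][0])
                     for i in range(1, len(phrases)))
    return lm + translation + distortion

# Toy usage with uniform scores:
H = [(1, 1, ("<s>",)), (2, 3, ("we", "must")), (4, 4, ("also",)), (8, 8, ("take",)),
     (5, 6, ("these", "criticisms")), (7, 7, ("seriously",)), (9, 9, ("</s>",))]
print(derivation_score(H, kappa={p: 0.0 for p in H},
                       bigram_lm=lambda w, prev: -1.0, eta=-0.5))
```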
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
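Definition 3 is straightforward to operationalize. The sketch below is illustrative code of my own (the names are assumptions, not the paper's): it extracts the sub-derivation H_j from a full derivation by keeping the phrases whose spans end at or before position j and splitting them into maximal runs of consecutive phrases. Run on the derivation of Figure 1 with j = 7, it reproduces the two phrase sequences π_1 and π_2 discussed next.

```python
def sub_derivation(phrases, j):
    """H_j: the maximal sub-sequences of consecutive phrases in the derivation
    whose spans end at or before position j (Definition 3)."""
    segments, current = [], []
    for (s, t, e) in phrases:
        if t <= j:
            current.append((s, t, e))
        elif current:                      # a phrase with s(p) > j breaks the run
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

H = [(1, 1, "<s>"), (2, 3, "we must"), (4, 4, "also"), (8, 8, "take"),
     (5, 6, "these criticisms"), (7, 7, "seriously"), (9, 9, "</s>")]
for pi in sub_derivation(H, 7):
    print(pi)
# [(1, 1, '<s>'), (2, 3, 'we must'), (4, 4, 'also')]
# [(5, 6, 'these criticisms'), (7, 7, 'seriously')]
```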
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
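To make the state representation and the test of Figure 5 concrete, here is a short illustrative sketch (my own code; the type names are assumptions). It builds the signature σ(π) of a phrase sequence and checks a state T = (j, {σ_1 . . . σ_r}) against the window allowed by Lemma 2: every signature must start at position 1 or at a position of at least j − d + 2, and must end at a position of at least j − d.

```python
from typing import List, Tuple

Phrase = Tuple[int, int, Tuple[str, ...]]        # (s, t, target words e_1 ... e_m)
Signature = Tuple[int, str, int, str]            # (s, w_s, t, w_t)

def signature(pi: List[Phrase]) -> Signature:
    """sigma(pi): start position, first target word, end position, last target word."""
    s, _, e_first = pi[0]
    _, t, e_last = pi[-1]
    return (s, e_first[0], t, e_last[-1])

def valid(j: int, signatures: List[Signature], d: int) -> bool:
    """Validity test of Figure 5 for a state T = (j, {sigma_1 ... sigma_r})."""
    for (s, _, t, _) in signatures:
        if s < j - d + 2 and s != 1:
            return False
        if t < j - d:
            return False
    return True

# Signatures of the two phrase sequences of H_7 from Figure 1, with d = 4
# (the largest jump in that example derivation):
pi1 = [(1, 1, ("<s>",)), (2, 3, ("we", "must")), (4, 4, ("also",))]
pi2 = [(5, 6, ("these", "criticisms")), (7, 7, ("seriously",))]
sigs = [signature(pi1), signature(pi2)]
print(sigs)                # [(1, '<s>', 4, 'also'), (5, 'these', 7, 'seriously')]
print(valid(7, sigs, 4))   # True
```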
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
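The counting argument behind Theorem 1 can be checked numerically. The sketch below is illustrative only: it implements the recurrence and base cases for g(k) exactly as stated in Lemma 4 and Appendix A, and multiplies the state bound ng(d + 2)h^{d+1} by the per-state transition bound (d + 2)^2 l from the proof of Theorem 1; the sample parameter values are invented.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def g(k: int) -> int:
    """Number of p-structures over a set of size k, via the recurrence of
    Lemma 4 / Appendix A: g(k) = 2 g(k-1) + 2 (k-1) g(k-2)."""
    if k <= 0:
        return 0
    if k == 1:
        return 2
    return 2 * g(k - 1) + 2 * (k - 1) * g(k - 2)

def runtime_bound(n: int, d: int, l: int, h: int) -> int:
    """Upper bound n * g(d+2) * h^(d+1) * (d+2)^2 * l on the work done by the
    dynamic program (proof of Theorem 1)."""
    return n * g(d + 2) * h ** (d + 1) * (d + 2) ** 2 * l

# e.g. a 30-word sentence, distortion limit 4, at most 10 phrases starting at
# any position, and at most 20 translations per word:
print(g(6), runtime_bound(n=30, d=4, l=10, h=20))
```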
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-9
The number of DP states with a fixed distortion limit d
State: (j, {σ_1 . . . σ_r}); r: number of tapes; j ∈ {1 . . . n}; n: source sentence length, O(n); s, t: source word indices; w_s, w_t: translated target words. The next phrase starts at j + 1; translated source words s, t can only occur in {j − d . . . j}; r is bounded by d.
State: (j, {σ_1 . . . σ_r}); r: number of tapes; j ∈ {1 . . . n}; n: source sentence length, O(n); s, t: source word indices; w_s, w_t: translated target words. The next phrase starts at j + 1; translated source words s, t can only occur in {j − d . . . j}; r is bounded by d.
[]
GEM-SciDuet-train-81#paper-1211#slide-10
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^{d+1}), where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
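The following short sketch (our own illustration) reads the sub-derivation H_j off a full derivation by keeping the phrases with t(p) ≤ j and grouping maximal runs of consecutive such phrases; recall that H_j is only defined when some phrase of H ends exactly at position j. The example reproduces H_7 from Figure 1.

```python
# Extract the sub-derivation H_j of Definition 3 from a full derivation,
# represented as a list of (s, t, e) phrases in derivation order.
def sub_derivation(phrases, j):
    segments, current = [], []
    for (s, t, e) in phrases:
        if t <= j:
            current.append((s, t, e))
        elif current:                      # a phrase with s > j ends the run
            segments.append(current)
            current = []
    if current:
        segments.append(current)
    return segments

H = [(1, 1, ["<s>"]), (2, 3, ["we", "must"]), (4, 4, ["also"]),
     (8, 8, ["take"]), (5, 6, ["these", "criticisms"]),
     (7, 7, ["seriously"]), (9, 9, ["</s>"])]
# H_7 consists of the two phrase sequences shown in Figure 1.
assert sub_derivation(H, 7) == [H[0:3], H[4:6]]
```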
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
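To make the four transition types concrete, here is a small Python sketch (our own function and variable names, not the paper's code) that applies a transition ψ_1 p ψ_2 to the signature set of a state; passing None plays the role of φ, and the distortion-limit checks on ψ_1, p and ψ_2 are omitted for brevity. The example reproduces the σ_1 (8, 8, take) σ_2 transition of Figure 3.

```python
# Apply one transition psi1 p psi2 to a state (j, {sigma_1 ... sigma_r}).
# A signature is a tuple (s, w_s, t, w_t); a phrase is (s, t, e) with e a
# list of target words; None stands for the paper's phi.
def apply_transition(sigs, phrase, psi1=None, psi2=None):
    s, t, e = phrase
    new = [sig for sig in sigs if sig != psi1 and sig != psi2]
    if psi1 is None and psi2 is None:       # case 1: p starts a new segment
        new.append((s, e[0], t, e[-1]))
    elif psi2 is None:                      # case 2: append p to psi1
        new.append((psi1[0], psi1[1], t, e[-1]))
    elif psi1 is None:                      # case 3: prepend p to psi2
        new.append((s, e[0], psi2[2], psi2[3]))
    else:                                   # case 4: p joins psi1 and psi2
        new.append((psi1[0], psi1[1], psi2[2], psi2[3]))
    return (t, set(new))

state7 = [(1, "<s>", 4, "also"), (5, "these", 7, "seriously")]
j, sigs = apply_transition(state7, (8, 8, ["take"]),
                           psi1=state7[0], psi2=state7[1])
assert j == 8 and sigs == {(1, "<s>", 7, "seriously")}
```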
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
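As a small numeric illustration of Lemma 4 (our own snippet, not from the paper): computing g(k) directly from the stated base cases and recurrence shows the rapid, faster-than-exponential growth that Lemma 5 bounds via O((k − 2)!), which is where the d! factor in the overall runtime comes from.

```python
from functools import lru_cache

# g(k) = 2*g(k-1) + 2*(k-1)*g(k-2), with g(0) = 0 and g(1) = 2 (Lemma 4).
@lru_cache(maxsize=None)
def g(k):
    if k == 0:
        return 0
    if k == 1:
        return 2
    return 2 * g(k - 1) + 2 * (k - 1) * g(k - 2)

print([g(k) for k in range(1, 8)])   # [2, 4, 16, 56, 240, 1040, 4960]
```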
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-10
Extend a sub-derivation by four operations
Consider a new phrase starting at source position j (O(l) choices). Four operations: new segment (σ_{r+1} built from p); append (p appended to σ_i); prepend (p prepended to σ_i); concatenate (σ_i, p, σ_i' joined). Example source: das muss unsere sorge gleichermaßen sein. Sub-derivation (this must)(our concern)(5, also): this must our concern also. Sub-derivation (this must)(5, also)(3, our concern): this must also our concern.
Consider a new phrase starting at source position j (O(l) choices). Four operations: new segment (σ_{r+1} built from p); append (p appended to σ_i); prepend (p prepended to σ_i); concatenate (σ_i, p, σ_i' joined). Example source: das muss unsere sorge gleichermaßen sein. Sub-derivation (this must)(our concern)(5, also): this must our concern also. Sub-derivation (this must)(5, also)(3, our concern): this must also our concern.
[]
GEM-SciDuet-train-81#paper-1211#slide-11
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^{d+1}) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
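A tiny helper (our own naming, not from the paper) sketching the vertex partition used in Lemma 1; the point of the bandwidth limit is that no edge can connect A_j to C_j, since any such pair of vertices is more than k positions apart.

```python
# Partition the vertices 1..n into A_j, B_j, C_j for prefix length j and
# bandwidth k, as in Section 3.2.
def partition(n, j, k):
    A = set(range(1, j - k + 1)) if j > k else set()
    B = set(range(1, j + 1)) - A
    C = set(range(j + 1, n + 1))
    return A, B, C

# With n = 8, k = 3 and j = 5: A = {1, 2}, B = {3, 4, 5}, C = {6, 7, 8}, and
# every vertex in A is more than k positions away from every vertex in C.
A, B, C = partition(8, 5, 3)
assert (A, B, C) == ({1, 2}, {3, 4, 5}, {6, 7, 8})
assert all(abs(u - v) > 3 for u in A for v in C)
```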
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
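To make the beam-search variant discussed in Section 5.1 concrete, here is a minimal sketch of the pruning step, assuming states are represented as small Python objects holding the dynamic-programming score alpha(T) and the list of signatures. The State class, the lm_unigram dictionary, and the beam size are illustrative choices, not part of the paper.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple
import math

BOS = "<s>"
Signature = Tuple[int, str, int, str]   # (start pos, start word, end pos, end word)

@dataclass
class State:
    j: int                       # all words x_1 ... x_j are translated
    signatures: List[Signature]  # signatures sigma_1 ... sigma_r of the state
    alpha: float                 # dynamic-programming score alpha(T)

def beta(state: State, lm_unigram: Dict[str, float]) -> float:
    # Future-cost estimate from Section 5.1: unigram log-probability of each
    # signature's start word, skipping signatures that begin with <s>.
    return sum(lm_unigram.get(w_s, math.log(1e-9))
               for (_, w_s, _, _) in state.signatures
               if w_s != BOS)

def beam(states: List[State], lm_unigram: Dict[str, float], size: int = 100) -> List[State]:
    # Keep only the `size` highest-scoring states under gamma(T) = alpha(T) + beta(T).
    return sorted(states,
                  key=lambda T: T.alpha + beta(T, lm_unigram),
                  reverse=True)[:size]
```

With this in place, the only change to the algorithm of Figure 2 is that the loop over the states in T_j iterates over the beamed subset instead of the full set, matching the replacement described in Section 5.1.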
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-11
Bound on running time O(nd!lh^{d+1})
n: source sentence length; d: distortion limit; l: bound on the number of phrases starting at any position; h: bound on the maximum number of target translations for any source word
n: source sentence length; d: distortion limit; l: bound on the number of phrases starting at any position; h: bound on the maximum number of target translations for any source word
[]
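To give a rough sense of how the bound on this slide scales, the sketch below evaluates g(k), the number of p-structures from Lemma 4 (using the recurrence derived in Appendix A, g(k) = 2g(k−1) + 2(k−1)g(k−2) with g(0) = 0 and g(1) = 2), and then the upper bounds from the proof of Theorem 1: at most n·g(d+2)·h^{d+1} states, each with at most (d+2)²·l outgoing transitions. The values of n, l, and h in the example are arbitrary, chosen only to illustrate the growth in d.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def g(k: int) -> int:
    # Number of p-structures over a set of k integers (Lemma 4 / Appendix A):
    # g(0) = 0, g(1) = 2, g(k) = 2*g(k-1) + 2*(k-1)*g(k-2) for k >= 2.
    if k == 0:
        return 0
    if k == 1:
        return 2
    return 2 * g(k - 1) + 2 * (k - 1) * g(k - 2)

def state_and_work_bounds(n: int, d: int, l: int, h: int):
    # Upper bounds from the proof of Theorem 1:
    #   number of states        <= n * g(d + 2) * h^(d + 1)
    #   transitions per state   <= (d + 2)^2 * l
    states = n * g(d + 2) * h ** (d + 1)
    work = states * (d + 2) ** 2 * l
    return states, work

if __name__ == "__main__":
    # Illustrative values only: a 30-word sentence, at most 10 phrases per
    # start position, and at most 5 translation choices per position.
    for d in (3, 4, 5, 6):
        states, work = state_and_work_bounds(n=30, d=d, l=10, h=5)
        print(f"d={d}: <= {states:.3e} states, <= {work:.3e} transitions")
```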
GEM-SciDuet-train-81#paper-1211#slide-12
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^{d+1}) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
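The sub-derivations of Definition 3 can be read off a derivation directly: walk the phrases in order, keep those with t(p) ≤ j, and start a new sequence whenever a phrase with t(p) > j intervenes. The sketch below illustrates this on the derivation from Figure 1; the tuple representation of phrases and the function name are illustrative, not from the paper.

```python
def sub_derivation(derivation, j):
    # H_j from Definition 3: the maximal subsequences of consecutive phrases
    # p = (s, t, e) in the derivation with t(p) <= j.
    sequences, current = [], []
    for (s, t, e) in derivation:
        if t <= j:
            current.append((s, t, e))
        elif current:            # a phrase with t(p) > j breaks the sequence
            sequences.append(current)
            current = []
    if current:
        sequences.append(current)
    return sequences

# The derivation H from Figure 1.
H = [(1, 1, "<s>"), (2, 3, "we must"), (4, 4, "also"), (8, 8, "take"),
     (5, 6, "these criticisms"), (7, 7, "seriously"), (9, 9, "</s>")]

# H_7 has two phrase sequences, matching the worked example in the paper:
# [[(1, 1, '<s>'), (2, 3, 'we must'), (4, 4, 'also')],
#  [(5, 6, 'these criticisms'), (7, 7, 'seriously')]]
print(sub_derivation(H, 7))
```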
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
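The paper text above defines the dynamic-programming state (j, {σ1 ... σr}) of Section 4.2 and the valid(T) check of Figure 5. As a reading aid only (not the authors' code), here is a minimal Python sketch of those two definitions; the class and function names are invented, and the garbled condition "s(σi) = 1" in the extracted Figure 5 is read as "s(σi) ≠ 1", which is the reading consistent with Lemma 2.

```python
from typing import FrozenSet, NamedTuple


class Signature(NamedTuple):
    """(s, w_s, t, w_t): start/end source positions and first/last target words of one phrase sequence."""
    s: int
    ws: str
    t: int
    wt: str


class State(NamedTuple):
    """State (j, {sigma_1 ... sigma_r}): every source word 1..j is translated by some sequence."""
    j: int
    sigs: FrozenSet[Signature]


def valid(state: State, d: int) -> bool:
    """The check of Figure 5: each signature must start/end inside the window given by Lemma 2."""
    for sig in state.sigs:
        if sig.s < state.j - d + 2 and sig.s != 1:   # property 3 of Lemma 2
            return False
        if sig.t < state.j - d:                      # property 4 of Lemma 2
            return False
    return True
```

On the worked example of Figures 1 and 3, the j = 7 state is State(7, frozenset({Signature(1, "<s>", 4, "also"), Signature(5, "these", 7, "seriously")})), and valid(state, d=4) returns True.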
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-12
Summary
Problem: Phrase-based decoding with a fixed distortion limit • A new decoding algorithm with O(nd!lh^(d+1)) time • Operate from left to right on the source side • Maintain multiple tapes on the target side
Problem: Phrase-based decoding with a fixed distortion limit • A new decoding algorithm with O(nd!lh^(d+1)) time • Operate from left to right on the source side • Maintain multiple tapes on the target side
[]
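The summary slide above restates the O(nd!lh^(d+1)) runtime. The throwaway snippet below (my own illustration, not taken from the paper) simply evaluates that bound numerically, to show that it grows linearly in the sentence length n but factorially in the distortion limit d; the values chosen for l and h are made up.

```python
from math import factorial


def runtime_bound(n: int, d: int, l: int, h: int) -> int:
    """The O(n * d! * l * h**(d+1)) bound of Theorem 1, evaluated as a plain number."""
    return n * factorial(d) * l * h ** (d + 1)


if __name__ == "__main__":
    # Hypothetical values: at most 20 phrases start at any position, 10 translations per word.
    for n, d in [(20, 3), (40, 3), (20, 6), (40, 6)]:
        print(f"n={n:3d} d={d}  bound={runtime_bound(n, d, l=20, h=10):,}")
```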
GEM-SciDuet-train-81#paper-1211#slide-13
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^(d+1)) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 )(7, 7, v 2 )(3, 3, v 1 ) (2, 2, u 1 )(6, 6, u 2 )(8, 8, w 2 )(10, 10, </s>) In this case the prefix (1, 1, <s>)(4, 5, y 1 )(9, 9, z 2 ) gives b 4 = 1 and b 8 = 0.", "Other values for b 4 and b 8 can be given by using (5, 5, z 1 ) in place of (4, 5, y 1 ), and (8, 9, y 2 ) in place of (9, 9, z 2 ), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, .", ".", ".", "k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s i = t i = 1 for some i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = t i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 2: There are g(k − 1) p-structures such that s i = 1 and t i = 1 for all i ∈ {1 .", ".", ".", "r}.", "This follows because once s i = 1 and t i = 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4 .", ".", ".", "k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with s i = 1 and t i = 1.", "This follows because for the i such that s i = 1, there are (k − 1) choices for the value for t i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, t i }.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 .", ".", ".", "r} with t i = 1 and s i = 1.", "This follows because for the i such that t i = 1, there are (k − 1) choices for the value for s i , and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 .", ".", ".", "k}/{1, s i }.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1) × g(k − 2) B Proof of Lemma 5 Recall that h(k) = f (k) × g(k) where f (k) = k 2 .", "Define k 0 to be the smallest integer such that for all k ≥ k 0 , 2f (k) f (k − 1) + 2f (k) f (k − 2) · k − 1 k − 3 ≤ k − 2 (4) For f (k) = k 2 we have k 0 = 9.", "Now choose a constant c such that for all k ∈ {1 .", ".", ".", "(k 0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k 0 and c we have h(k) ≤ c(k − 2)!", "for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k 0 , we have h(k) = f (k)g(k) = 2f (k)g(k − 1) + 2f (k)(k − 1)g(k − 2) (5) = 2f (k) f (k − 1) h(k − 1) + 2f (k) f (k − 2) (k − 1)h(k − 2) ≤ 2cf (k) f (k − 1) + 2cf (k) f (k − 2) · k − 1 k − 3 (k − 3)!", "(6) ≤ c(k − 2)!", "(7) Eq.", "5 follows from g(k) = 2g(k−1)+2(k−1)g(k− 2).", "Eq.", "6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)!", "and h(k − 2) ≤ c(k − 4)!.", "Eq 7 follows because Eq.", "4 holds for all k ≥ k 0 ." ] }
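Section 4.2.1 of the paper text above lists four ways a phrase starting at source position j+1 can attach to the existing signatures (stand-alone, append, prepend, or join). The sketch below restates those four cases as a single function; the helper names are invented, and scoring as well as the distortion-limit checks on the chosen signatures are deliberately left out.

```python
from collections import namedtuple

Sig = namedtuple("Sig", "s ws t wt")   # start position, first word, end position, last word


def apply_transition(j, sigs, phrase, left=None, right=None):
    """Attach phrase = (s, t, words), with s == j + 1, to the current signatures.

    Case 1 (left=None, right=None): the phrase opens a new stand-alone sequence.
    Case 2 (left only):             the phrase is appended to signature `left`.
    Case 3 (right only):            the phrase is prepended to signature `right`.
    Case 4 (left and right):        the phrase joins `left` and `right` into one sequence.
    Scoring and the distortion-limit checks on left/right are omitted here.
    """
    s, t, words = phrase
    assert s == j + 1, "phrases are consumed strictly left to right on the source side"
    assert left is None or right is None or left != right, "case 4 needs two distinct signatures"
    new_sigs = set(sigs) - {x for x in (left, right) if x is not None}
    if left is None and right is None:
        new_sigs.add(Sig(s, words[0], t, words[-1]))
    elif right is None:
        new_sigs.add(Sig(left.s, left.ws, t, words[-1]))
    elif left is None:
        new_sigs.add(Sig(s, words[0], right.t, right.wt))
    else:
        new_sigs.add(Sig(left.s, left.ws, right.t, right.wt))
    return t, frozenset(new_sigs)
```

For example, apply_transition(7, {Sig(1, "<s>", 4, "also"), Sig(5, "these", 7, "seriously")}, (8, 8, ["take"]), left=Sig(1, "<s>", 4, "also"), right=Sig(5, "these", 7, "seriously")) returns (8, frozenset({Sig(1, "<s>", 7, "seriously")})), matching the transition shown under Figure 3.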
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-13
Follow-up paper in EMNLP discussing experimental results
To appear in EMNLP 2017: Source-side left-to-right or target-side left-to-right? An empirical comparison of two phrase-based decoding algorithms • Beam search with a trigram language model • Constraints on the number of tapes • Achieve similar efficiency and accuracy as Moses
To appear in EMNLP 2017: Source-side left-to-right or target-side left-to-right? An empirical comparison of two phrase-based decoding algorithms • Beam search with a trigram language model • Constraints on the number of tapes • Achieve similar efficiency and accuracy as Moses
[]
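The follow-up slide above mentions beam search and a limit on the number of tapes. One possible reading of the beam function of Section 5.1, combined with a tape-count cutoff of my own (the paper itself does not specify a max_tapes parameter), is sketched below; the signature records are assumed to carry the fields used in the earlier sketches.

```python
def beam(states, alpha, unigram_logprob, beam_size=100, max_tapes=3):
    """Prune the set T_j of dynamic-programming states.

    states:          iterable of (j, signatures) pairs; each signature has fields s, ws, t, wt
    alpha:           dict from state to its dynamic-programming score alpha(T)
    unigram_logprob: function word -> log p(word), used for the beta(T) "future" score
    max_tapes:       drop states with more than this many disconnected sequences (my own cutoff)
    """
    def gamma(state):
        _, sigs = state
        beta = sum(unigram_logprob(sig.ws) for sig in sigs if sig.ws != "<s>")
        return alpha[state] + beta

    kept = [s for s in states if len(s[1]) <= max_tapes]
    return sorted(kept, key=gamma, reverse=True)[:beam_size]
```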
GEM-SciDuet-train-81#paper-1211#slide-14
1211
A Polynomial-Time Dynamic Programming Algorithm for Phrase-Based Decoding with a Fixed Distortion Limit
Decoding of phrase-based translation models in the general case is known to be NP-complete, by a reduction from the traveling salesman problem (Knight, 1999). In practice, phrase-based systems often impose a hard distortion limit that limits the movement of phrases during translation. However, the impact on complexity after imposing such a constraint is not well studied. In this paper, we describe a dynamic programming algorithm for phrase-based decoding with a fixed distortion limit. The runtime of the algorithm is O(nd!lh^(d+1)) where n is the sentence length, d is the distortion limit, l is a bound on the number of phrases starting at any position in the sentence, and h is related to the maximum number of target language translations for any source word. The algorithm makes use of a novel representation that gives a new perspective on decoding of phrase-based models.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 415, 416, 417, 418, 419, 420, 421, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 445, 446, 447, 448, 449, 450, 451, 452, 453, 454, 455, 456, 457, 458, 459, 460, 461, 462, 463, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 496, 497, 498, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682 ], "paper_content_text": [ "Introduction Phrase-based translation models (Koehn et al., 2003; Och and Ney, 2004) are widely used in statistical machine translation.", "The decoding problem for phrase-based translation models 
is known to be difficult: the results from Knight (1999) imply that in the general case decoding of phrase-based translation models is NP-complete.", "The complexity of phrase-based decoding comes from reordering of phrases.", "In practice, however, various constraints on reordering are often imposed in phrase-based translation systems.", "A common constraint is a \"distortion limit\", which places a hard constraint on how far phrases can move.", "The complexity of decoding with such a distortion limit is an open question: the NP-hardness result from Knight * On leave from Columbia University.", "(1999) applies to a phrase-based model with no distortion limit.", "This paper describes an algorithm for phrasebased decoding with a fixed distortion limit whose runtime is linear in the length of the sentence, and for a fixed distortion limit is polynomial in other factors.", "More specifically, for a hard distortion limit d, and sentence length n, the runtime is O(nd!lh d+1 ), where l is a bound on the number of phrases starting at any point in the sentence, and h is related to the maximum number of translations for any word in the source language sentence.", "The algorithm builds on the insight that decoding with a hard distortion limit is related to the bandwidth-limited traveling salesman problem (BTSP) (Lawler et al., 1985) .", "The algorithm is easily amenable to beam search.", "It is quite different from previous methods for decoding of phrase-based models, potentially opening up a very different way of thinking about decoding algorithms for phrasebased models, or more generally for models in statistical NLP that involve reordering.", "2 Related Work Knight (1999) proves that decoding of word-to-word translation models is NP-complete, assuming that there is no hard limit on distortion, through a reduction from the traveling salesman problem.", "Phrasebased models are more general than word-to-word models, hence this result implies that phrase-based decoding with unlimited distortion is NP-complete.", "Phrase-based systems can make use of both reordering constraints, which give a hard \"distortion limit\" on how far phrases can move, and reordering models, which give scores for reordering steps, often penalizing phrases that move long distances.", "Moses (Koehn et al., 2007b ) makes use of a distortion limit, and a decoding algorithm that makes use of bit-strings representing which words have been translated.", "We show in Section 5.2 of this paper that this can lead to at least 2 n/4 bit-strings for an input sentence of length n, hence an exhaustive version of this algorithm has worst-case runtime that is exponential in the sentence length.", "The current paper is concerned with decoding phrase-based models with a hard distortion limit.", "Various other reordering constraints have been considered.", "Zens and Ney (2003) and Zens et al.", "(2004) consider two types of hard constraints: the IBM constraints, and the ITG (inversion transduction grammar) constraints from the model of Wu (1997) .", "They give polynomial time dynamic programming algorithms for both of these cases.", "It is important to note that the IBM and ITG constraints are different from the distortion limit constraint considered in the current paper.", "Decoding algorithms with ITG constraints are further studied by Feng et al.", "(2010) and Cherry et al.", "(2012) .", "Kumar and Byrne (2005) describe a class of reordering constraints and models that can be encoded in finite state transducers.", "Lopez (2009) shows that 
several translation models can be represented as weighted deduction problems and analyzes their complexities.", "1 Koehn et al.", "(2003) describe a beamsearch algorithm for phrase-based decoding that is in widespread use; see Section 5 for discussion.", "A number of reordering models have been proposed, see for example Tillmann (2004) , Koehn et al.", "(2007a) and Galley and Manning (2008) .", "DeNero and Klein (2008) consider the phrase alignment problem, that is, the problem of finding an optimal phrase-based alignment for a sourcelanguage/target-language sentence pair.", "They show that in the general case, the phrase alignment problem is NP-hard.", "It may be possible to extend the techniques in the current paper to the phrasealignment problem with a hard distortion limit.", "Various methods for exact decoding of phrasebased translation models have been proposed.", "Zaslavskiy et al.", "(2009) describe the use of travel-1 An earlier version of this paper states the complexity of decoding with a distortion limit as O(I 3 2 d ) where d is the distortion limit and I is the number of words in the sentence; however (personal communication from Adam Lopez) this runtime is an error, and should be O(2 I ) i.e., exponential time in the length of the sentence.", "A corrected version of the paper corrects this.", "ing salesman algorithms for phrase-based decoding.", "Chang and Collins (2011) describe an exact method based on Lagrangian relaxation.", "Aziz et al.", "(2014) describe a coarse-to-fine approach.", "These algorithms all have exponential time runtime (in the length of the sentence) in the worst case.", "Galley and Manning (2010) describe a decoding algorithm for phrase-based systems where phrases can have discontinuities in both the source and target languages.", "The algorithm has some similarities to the algorithm we propose: in particular, it makes use of a state representation that contains a list of disconnected phrases.", "However, the algorithms differ in several important ways: Galley and Manning (2010) make use of bit string coverage vectors, giving an exponential number of possible states; in contrast to our approach, the translations are not formed in strictly left-to-right ordering on the source side.", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs This section first defines the bandwidth-limited traveling salesman problem, then describes a polynomial time dynamic programming algorithm for the traveling salesman path problem on bandwidth limited graphs.", "This algorithm is the algorithm proposed by Lawler et al.", "(1985) 2 with small modifications to make the goal a path instead of a cycle, and to consider directed rather than undirected graphs.", "Bandwidth-Limited TSPPs The input to the problem is a directed graph G = (V, E), where V is a set of vertices and E is a set of directed edges.", "We assume that V = {1, 2, .", ".", ".", ", n}.", "A directed edge is a pair (i, j) where i, j ∈ V , and i = j.", "Each edge (i, j) ∈ E has an associated weight w i,j .", "Given an integer k ≥ 1, a graph is bandwidth-limited with bandwidth k if ∀(i, j) ∈ E, |i − j| ≤ k The traveling salesman path problem (TSPP) on the graph G is defined as follows.", "We will assume that vertex 1 is the \"source\" vertex and vertex n is the \"sink\" vertex.", "The TSPP is to find the minimum cost directed path from vertex 1 to vertex n, which passes through each vertex exactly once.", "An Algorithm for Bandwidth-Limited TSPPs The key idea of the dynamic-programming algorithm 
for TSPPs is the definition of equivalence classes corresponding to dynamic programming states, and an argument that the number of equivalence classes depends only on the bandwidth k. The input to our algorithm will be a directed graph G = (V, E), with weights w i,j , and with bandwidth k. We define a 1-n path to be any path from the source vertex 1 to the sink vertex n that visits each vertex in the graph exactly once.", "A 1-n path is a subgraph (V , E ) of G, where V = V and E ⊆ E. We will make use of the following definition: Definition 1.", "For any 1-n path H, define H j to be the subgraph that H induces on vertices 1, 2, .", ".", ".", "j, where 1 ≤ j ≤ n. That is, H j contains the vertices 1, 2, .", ".", ".", "j and the edges in H between these vertices.", "For a given value for j, we divide the vertices V into three sets A j , B j and C j : • A j = {1, 2, .", ".", ".", ", (j − k)} (A j is the empty set if j ≤ k).", "• B j = {1 .", ".", ".", "j} \\ A j .", "3 • C j = {j + 1, j + 2, .", ".", ".", ", n} (C j is the empty set if j = n).", "Note that the vertices in subgraph H j are the union of the sets A j and B j .", "A j is the empty set if j ≤ k, but B j is always non-empty.", "The following Lemma then applies: Lemma 1.", "For any 1-n path H in a graph with bandwidth k, for any 1 ≤ j ≤ n, the subgraph H j has the following properties: 1.", "If vertex 1 is in A j , then vertex 1 has degree one.", "For any vertex v ∈ A j with v ≥ 2, vertex v has degree two.", "3.", "H j contains no cycles.", "Proof.", "The first and second properties are true because of the bandwidth limit.", "Under the constraint of bandwidth k, any edge (u, v) in H such that u ∈ A j , must have v ∈ A j ∪ B j = H j .", "This fol- lows because if v ∈ C j = {j + 1, j + 2, .", ".", ".", "n} and u ∈ A j = {1, 2, .", ".", ".", "j − k}, then |u − v| > k. 
Similarly any edge (u, v) ∈ H such that v ∈ A j must have u ∈ A j ∪ B j = H j .", "It follows that for any vertex u ∈ A j , with u > 1, there are edges (u, v) ∈ H j and (v , u) ∈ H j , hence vertex u has degree 2.", "For vertex u ∈ A j with u = 1, there is an edge (u, v) ∈ H j , hence vertex u has degree 1.", "The third property (no cycles) is true because H j is a subgraph of H, which has no cycles.", "It follows that each connected component of H j is a directed path, that the start points of these paths are in the set {1} ∪ B j , and that the end points of these paths are in the set B j .", "We now define an equivalence relation on subgraphs.", "Two subgraphs H j and H j are in the same equivalence class if the following conditions hold (taken from Lawler et al.", "(1985) ): 1.", "For any vertex v ∈ B j , the degree of v in H j and H j is the same.", "For each path (connected component) in H j there is a path in H j with the same start and end points, and conversely.", "The significance of this definition is as follows.", "Assume that H * is an optimal 1-n path in the graph, and that it induces the subgraph H j on vertices 1 .", ".", ".", "j.", "Assume that H j is another subgraph over vertices 1 .", ".", ".", "j, which is in the same equivalence class as H j .", "For any subgraph H j , define c(H j ) to be the sum of edge weights in H j : c(H j ) = (u,v)∈H j w u,v Then it must be the case that c(H j ) ≥ c(H j ).", "Otherwise, we could simply replace H j by H j in H * , thereby deriving a new 1-n path with a lower cost, implying that H * is not optimal.", "This observation underlies the dynamic programming approach.", "Define σ to be a function that maps a subgraph H j to its equivalence class σ(H j ).", "The equivalence class σ(H j ) is a data structure that stores the degrees of the vertices in B j , together with the start and end points of each connected component in H j .", "Next, define ∆ to be a set of 0, 1 or 2 edges between vertex (j + 1) and the vertices in B j .", "For any subgraph H j+1 of a 1-n path, there is some ∆, simply found by recording the edges incident to vertex (j + 1).", "For any H j , define τ (σ(H j ), ∆) to be the equivalence class resulting from adding the edges in ∆ to the data structure σ(H j ).", "If adding the edges in ∆ to σ(H j ) results in an ill-formed subgraph-for example, a subgraph that has one or more cyclesthen τ (σ(H j ), ∆) is undefined.", "The following recurrence then defines the dynamic program (see Eq.", "20 of Lawler et al.", "(1985) ): α(j + 1, S) = min ∆,S :τ (S ,∆)=S α(j, S ) + c(∆) Here S is an equivalence class over vertices {1 .", ".", ".", "(j +1)}, and α(S, j +1) is the minimum score for any subgraph in equivalence class S. The min is taken over all equivalence classes S over vertices {1 .", ".", ".", "j}, together with all possible values for ∆.", "A Dynamic Programming Algorithm for Phrase-Based Decoding We now describe the dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "We first give basic definitions for phrasebased decoding, and then describe the algorithm.", "Basic Definitions Consider decoding an input sentence consisting of words x 1 .", ".", ".", "x n for some integer n. 
We assume that x 1 = <s> and x n = </s> where <s> and </s> are the sentence start and end symbols respectively.", "A phrase-based lexicon specifies a set of possible translations in the form of phrases p = (s, t, e), where s and t are integers such that 1 ≤ s ≤ t ≤ n, and e is a sequence of m ≥ 1 target-language words e 1 .", ".", ".", "e m .", "This signifies that words x s .", ".", ".", "x t in the source language have a translation as e 1 .", ".", ".", "e m in the target language.", "We use s(p), t(p) and e(p) to refer to the three components of a phrase p = (s, t, e), and e 1 (p) .", ".", ".", "e m (p) to refer to the words in the targetlanguage string e(p).", "We assume that (1, 1, <s>) and (n, n, </s>) are the only translation entries with s(p) ≤ 1 and t(p) ≥ n respectively.", "A derivation is then defined as follows: Definition 2 (Derivations).", "A derivation is a sequence of phrases p 1 .", ".", ".", "p L such that • p 1 = (1, 1, <s>) and p L = (n, n, </s>).", "• Each source word is translated exactly once.", "• The distortion limit is satisfied for each pair of phrases p i−1 , p i , that is: |t(p i−1 ) + 1 − s(p i )| ≤ d ∀ i = 2 .", ".", ".", "L. where d is an integer specifying the distortion limit in the model.", "Given a derivation p 1 .", ".", ".", "p L , a target-language translation can be obtained by concatenating the target-language strings e(p 1 ) .", ".", ".", "e(p L ).", "The scoring function is defined as follows: f (p 1 .", ".", ".", "p L ) = λ(e(p 1 ) .", ".", ".", "e(p L )) + L i=1 κ(p i ) + L i=2 η × |t(p i−1 ) + 1 − s(p i )| (1) For each phrase p, κ(p) is the translation score for the phrase.", "The parameter η is the distortion penalty, which is typically a negative constant.", "λ(e) is a language model score for the string e. We will assume a bigram language model: λ(e 1 .", ".", ".", "e m ) = m i=2 λ(e i |e i−1 ).", "The generalization of our algorithm to higher-order n-gram language models is straightforward.", "The goal of phrase-based decoding is to find y * = arg max y∈Y f (y) where Y is the set of valid derivations for the input sentence.", "Remark (gap constraint): Note that a common restriction used in phrase-based decoding (Koehn et al., 2003; Chang and Collins, 2011) , is to impose an additional \"gap constraint\" while decoding.", "See Chang and Collins (2011) for a description.", "In this case it is impossible to have a dynamicprogramming state where word x i has not been translated, and where word x i+k has been translated, for k > d. 
This limits distortions further, and it can be shown in this case that the number of possible bitstrings is O(2 d ) where d is the distortion limit.", "Without this constraint the algorithm of Koehn et al.", "(2003) actually fails to produce translations for many input sentences (Chang and Collins, 2011) .", "H 1 = π 1 = 1, 1, <s> H 3 = π 1 = 1, 1, <s> 2, 3, we must H 4 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also H 6 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms H 7 = π 1 , π 2 = 1, 1, <s> 2, 3, we must 4, 4, also , 5, 6, these criticisms 7, 7, seriously H 8 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously H 9 = π 1 = 1, 1, <s> 2, 3, we must 4, 4, also 8, 8, take 5, 6, these criticisms 7, 7, seriously 9, 9, </s> 3, 4, 6, 7, 8, 9} induced by the full derivation H = (1, 1, <s>)(2, 3, we must)(4, 4, also)(8, 8, take)(5, 6, these criticisms)(7, 7, seriously)(9, 9</s>) .", "Note that H j includes the phrases that cover spans ending before or at position j. Sub-derivation H j is extended to another subderivation H j+i by incorporating a phrase of length i.", "Figure 1: Sub-derivations H j for j ∈ {1, The Algorithm We now describe the dynamic programming algorithm.", "Intuitively the algorithm builds a derivation by processing the source-language sentence in strictly left-to-right order.", "This is in contrast with the algorithm of Koehn et al.", "(2007b) , where the targetlanguage sentence is constructed from left to right.", "Throughout this section we will use π, or π i for some integer i, to refer to a sequence of phrases: π = p 1 .", ".", ".", "p l where each phrase p i = (s(p i ), t(p i ), e(p i )), as de- fined in the previous section.", "We overload the s, t and e operators, so that if π = p 1 .", ".", ".", "p l , we have s(π) = s(p 1 ), t(π) = t(p l ), and e(π) = e(p 1 ) · e(p 2 ) .", ".", ".", "· e(p l ), where x · y is the concatenation of strings x and y.", "A derivation H consists of a single phrase sequence π = p 1 .", ".", ".", "p L : H = π = p 1 .", ".", ".", "p L where the sequence p 1 .", ".", ".", "p L satisfies the constraints in definition 2.", "We now give a definition of sub-derivations and complement sub-derivations: Definition 3 (Sub-derivations and Complement Sub- -derivations).", "For any H = p 1 .", ".", ".", "p L , for any j ∈ {1 .", ".", ".", "n} such that ∃ i ∈ {1 .", ".", ".", "L} s.t.", "t(p i ) = j, the sub-derivation H j and the complement sub- derivationH j are defined as H j = π 1 .", ".", ".", "π r ,H j = π 1 .", ".", ".π r where the following properties hold: • r is an integer with r ≥ 1.", "• Each π i for i = 1 .", ".", ".", "r is a sequence of one or more phrases, where each phrase p ∈ π i has t(p) ≤ j.", "• Eachπ i for i = 1 .", ".", ".", "(r − 1) is a sequence of one or more phrases, where each phrase p ∈π i has s(p) > j.", "•π r is a sequence of zero or more phrases, where each phrase p ∈π r has s(p) > j.", "We have zero phrases inπ r iff j = n where n is the length of the sentence.", "• Finally, π 1 ·π 1 · π 2 ·π 2 .", ".", ".", "π r ·π r = p 1 .", ".", ".", "p L where x · y denotes the concatenation of phrase sequences x and y.", "Note that for any j ∈ {1 .", ".", ".", "n} such that i ∈ {1 .", ".", ".", "L} such that t(p i ) = j, the sub-derivation H j and the complement sub-derivationH j is not defined.", "Thus for each integer j such that there is a phrase in H ending at point j, we can divide the phrases in H into two sets: phrases p with t(p) ≤ j, and phrases p with s(p) > j.", 
"The sub-derivation H j lists all maximal sub-sequences of phrases with t(p) ≤ j.", "The complement sub-derivationH j lists all maximal sub-sequences of phrases with s(p) > j.", "Figure 1 gives all sub-derivations H j for the derivation H = p 1 .", ".", ".", "p 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) (8, 8, take)(5, 6, these criticisms) (7, 7, seriously)(9, 9, </s>) As one example, the sub-derivation H 7 = π 1 , π 2 induced by H has two phrase sequences: π 1 = (1, 1, <s>)(2, 3, we must)(4, 4, also) π 2 = (5, 6, these criticisms)(7, 7, seriously) Note that the phrase sequences π 1 and π 2 give translations for all words x 1 .", ".", ".", "x 7 in the sentence.", "There 63 are two disjoint phrase sequences because in the full derivation H, the phrase p = (8, 8, take), with t(p) = 8 > 7, is used to form a longer sequence of phrases π 1 p π 2 .", "For the above example, the complement sub-derivationH 7 is as follows: π 1 = (8, 8, take) π 2 = (9, 9, </s>) It can be verified that π 1 ·π 1 ·π 2 ·π 2 = H as required by the definition of sub-derivations and complement sub-derivations.", "We now state the following Lemma: Lemma 2.", "For any derivation H = p 1 .", ".", ".", "p L , for any j such that ∃i such that t(p i ) = j, the subderivation H j = π 1 .", ".", ".", "π r satisfies the following properties: 1. s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "2.", "For all positions i ∈ {1 .", ".", ".", "j}, there exists a phrase p ∈ π, for some phrase sequence π ∈ H j , such that s(p) ≤ i ≤ t(p).", "For all i = 2 .", ".", ".", "r, s(π i ) ∈ {(j − d + 2) .", ".", ".", "j} 4.", "For all i = 1 .", ".", ".", "r, t(π i ) ∈ {(j − d) .", ".", ".", "j} Here d is again the distortion limit.", "This lemma is a close analogy of Lemma 1.", "The proof is as follows: Proof of Property 1: For all values of j, the phrase p 1 = (1, 1, <s>) has t(p 1 ) ≤ j, hence we must have π 1 = p 1 .", ".", ".", "p k for some k ∈ {1 .", ".", ".", "L}.", "It follows that s(π 1 ) = 1 and e 1 (π 1 ) = <s>.", "Proof of Property 2: For any position i ∈ {1 .", ".", ".", "j}, define the phrase (s, t, e) in the derivation H to be the phrase that covers word i; i.e., the phrase such that s ≤ i ≤ t. We must have s ∈ {1 .", ".", ".", "j}, because s ≤ i and i ≤ j.", "We must also have t ∈ {1 .", ".", ".", "j}, because otherwise we have s ≤ j < t, which contradicts the assumption that there is some i ∈ {1 .", ".", ".", "L} such that t(p i ) = j.", "It follows that the phrase (s, t, e) has t ≤ j, and from the definition of sub-derivations it follows that the phrase is in one of the phrase sequences π 1 .", ".", ".", "π r .", "Proof of Property 3: This follows from the distortion limit.", "Consider the complement sub-derivation H j = π 1 .", ".", ".π r .", "For the distortion limit to be satisfied, for all i ∈ {2 .", ".", ".", "r}, we must have |t(π i−1 ) + 1 − s(π i )| ≤ d We must also have t(π i−1 ) > j, and s(π i ) ≤ j, by the definition of sub-derivations.", "It follows that s(π i ) ∈ {(j − d + 2) .", ".", ".", "j}.", "Proof of Property 4: This follows from the distortion limit.", "First consider the case whereπ r is non-empty.", "For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "r}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j}.", "Next consider the case whereπ r is empty.", "In this case we must have j = n. 
For the distortion limit to be satisfied, for all i ∈ {1 .", ".", ".", "(r − 1)}, we must have |t(π i ) + 1 − s(π i )| ≤ d We must also have t(π i ) ≤ j, and s(π i ) > j, by the definition of sub-derivations.", "It follows that t(π i ) ∈ {(j − d) .", ".", ".", "j} for i ∈ {1 .", ".", ".", "(r − 1)}.", "For i = r, we must have t(π i ) = n, from which it again follows that t(π r ) = n ∈ {(j − d) .", ".", ".", "j}.", "We now define an equivalence relation between sub-derivations, which will be central to the dynamic programming algorithm.", "We define a function σ that maps a phrase sequence π to its signature.", "The signature is a four-tuple: σ(π) = (s, w s , t, w t ).", "where s is the start position, w s is the start word, t is the end position and w t is the end word of the phrase sequence.", "We will use s(σ), w s (σ), t(σ), and w t (σ) to refer to each component of a signature σ.", "For example, given a phrase sequence π = (1, 1, <s>) (2, 2, we) (4, 4, also) , its signature is σ(π) = (1, <s>, 4, also).", "The signature of a sub-derivation H j = π 1 .", ".", ".", "π r is defined to be σ(H j ) = σ(π 1 ) .", ".", ".", "σ(π r ) .", "For example, with H 7 as defined above, we have σ(H 7 ) = 1, <s>, 4, also , 5, these, 7, seriously Two partial derivations H j and H j are in the same equivalence class iff σ(H j ) = σ(H j ).", "We can now state the following Lemma: Lemma 3.", "Define H * to be the optimal derivation for some input sentence, and H * j to be a subderivation of H * .", "Suppose H j is another subderivation with j words, such that σ(H j ) = σ(H * j ).", "Then it must be the case that f (H * j ) ≥ f (H j ), where f is the function defined in Section 4.1.", "Proof.", "Define the sub-derivation and complement sub-derivation of H * as H * j = π 1 .", ".", ".", "π r H * j = π 1 .", ".", ".π r We then have f (H * ) = f (H * j ) + f (H * j ) + γ (2) where f (.", ".", ".)", "is as defined in Eq.", "1, and γ takes into account the bigram language modeling scores and the distortion scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 , etc.", "The proof is by contradiction.", "Define H j = π 1 .", ".", ".", "π r and assume that f (H * j ) < f (H j ).", "Now consider H = π 1π 1 π 2π 2 .", ".", ".", "π rπ r This is a valid derivation because the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 have the same distortion distances as π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , hence they must satisfy the distortion limit.", "We have f (H ) = f (H j ) + f (H * j ) + γ (3) where γ has the same value as in Eq.", "2.", "This follows because the scores for the transitions π 1 →π 1 , π 1 → π 2 , π 2 →π 2 are identical to the scores for the transitions π 1 →π 1 ,π 1 → π 2 , π 2 →π 2 , because σ(H * j ) = σ(H j ).", "It follows from Eq.", "2 and Eq.", "3 that if f (H j ) > f (H * j ), then f (H ) > f (H * ).", "But this contradicts the assumption that H * is optimal.", "It follows that we must have f (H j ) ≤ f (H * j ).", "This lemma leads to a dynamic programming algorithm.", "Each dynamic programming state consists of an integer j ∈ {1 .", ".", ".", "n} and a set of r signatures: T = (j, {σ 1 .", ".", ".", "σ r }) Figure 2 shows the dynamic programming algorithm.", "It relies on the following functions: Inputs: • An integer n specifying the length of the input sequence.", "• A function δ(T ) returning the set of valid transitions from state T .", "• A function τ (T, ∆) returning the state reached from state T by transition ∆ ∈ δ(T ).", "• A function valid(T ) returning TRUE if state T is valid, otherwise 
FALSE.", "• A function score(∆) that returns the score for any transition ∆.", "Initialization: {(1, <s>, 1, <s>) T 1 = (1, }) α(T 1 ) = 0 T 1 = {T 1 }, ∀j ∈ {2 .", ".", ".", "n}, T j = ∅ for j = 1, .", ".", ".", ", n − 1 for each state T ∈ T j for each ∆ ∈ δ(T ) T = τ (T, ∆) if valid(T ) = FALSE: continue score = α(T ) + score(∆) Define t to be the integer such that T = (t, {σ 1 .", ".", ".", "σr}) if T / ∈ Tt Tt = Tt ∪ {T } α(T ) = score bp(T ) = (∆) else if score > α(T ) α(T ) = score bp(T ) = (∆) Return: the score of the state (n, {(1, <s>, n, </s>)}) in Tn, and backpointers bp defining the transitions leading to this state.", "is the score for state T .", "The bp(T ) variables are backpointers used in recovering the highest scoring sequence of transitions.", "• For any state T , δ(T ) is the set of outgoing transitions from state T .", "• For any state T , for any transition ∆ ∈ δ(T ), τ (T, ∆) is the state reached by transition ∆ from state T .", "• For any state T , valid(T ) checks if a resulting state is valid.", "• For any transition ∆, score(∆) is the score for the transition.", "We next give full definitions of these functions.", "Definitions of δ(T ) and τ (T, ∆) Recall that for any state T , δ(T ) returns the set of possible transitions from state T .", "In addition τ (T, ∆) returns the state reached when taking transition ∆ ∈ δ(T ).", "Given the state T = (j, {σ 1 .", ".", ".", "σ r }), each transition is of the form ψ 1 p ψ 2 where ψ 1 , p and ψ 2 are defined as follows: • p is a phrase such that s(p) = j + 1.", "• ψ 1 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 1 = φ, it must be the case that |t(ψ 1 ) + 1 − s(p)| ≤ d and t(ψ 1 ) = n. • ψ 2 ∈ {σ 1 .", ".", ".", "σ r } ∪ {φ}.", "If ψ 2 = φ, it must be the case that |t(p) + 1 − s(ψ 2 )| ≤ d and s(ψ 2 ) = 1.", "• If ψ 1 = φ and ψ 2 = φ, then ψ 1 = ψ 2 .", "Thus there are four possible types of transition from a state T = (j, {σ 1 .", ".", ".", "σ r }): Case 1: ∆ = φ p φ.", "In this case the phrase p is incorporated as a stand-alone phrase.", "The new state T is equal to (j , {σ 1 .", ".", ".", "σ r+1 }) where j = t(p), where σ i = σ i for i = 1 .", ".", ".", "r, and σ r+1 = (s(p), e 1 (p), t(p), e m (p)).", "Case 2: ∆ = σ i p φ for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is appended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(σ i ), w s (σ i ), t(p), e m (p)), and where σ i = σ i for all i = i.", "Case 3: ∆ = φ p σ i for some σ i ∈ {σ 1 .", ".", ".", "σ r }.", "In this case the phrase p is prepended to the signa- ture σ i .", "The new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r ), where j = t(p), where σ i is replaced by (s(p), e 1 (p), t(σ i ), w t (σ i )), and where σ i = σ i for all i = i.", "Case 4: ∆ = σ i p σ i for some σ i , σ i ∈ {σ 1 .", ".", ".", "σ r }, with i = i.", "In this case phrase p is appended to signature σ i , and prepended to signature σ i , effectively joining the two signatures together.", "In this case the new state T = τ (T, ∆) is of the form (j , σ 1 .", ".", ".", "σ r−1 ), where signatures σ i and σ i are replaced by a new signature (s(σ i ), w s (σ i ), t(σ i ), w t (σ i )), and all other signatures are copied across from T to T .", "Figure 3 gives the dynamic programming states and transitions for the derivation H in Figure 1 .", "For example, the sub-derivation H 7 = (1, 1, <s>)(2, 3, we must)(4, 4, also) , (5, 6, these criticisms)(7, 7, seriously) will 
be mapped to a state T = 7, σ(H 7 ) = 7, (1, <s>, 4, also), (5, these, 7, seriously) 1, σ 1 = 1, <s>, 1, <s> 3, σ 1 = 1, <s>, 3, must 4, σ 1 = 1, <s>, 4, also 6, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 6, criticisms 7, σ 1 = 1, <s>, 4, also , σ 2 = 5, these, 7, seriously 8, σ 1 = 1, <s>, 7, seriously 9, σ 1 = 1, <s>, 9, </s> σ 1 (2, 3, we must) φ σ 1 (4, 4, also) φ φ (5, 6, these criticisms) φ σ 2 (7, 7, seriously) φ σ 1 (8, 8, take) σ 2 σ 1 (9, 9, </s>) φ Figure 3 : Dynamic programming states and the transitions from one state to another, using the same example as in Figure 1 .", "Note that σ i = σ(π i ) for all π i ∈ H j .", "The transition σ 1 (8, 8, take) σ 2 from this state leads to a new state, T = 8, σ 1 = (1, <s>, 7, seriously) 4.3 Definition of score(∆) Figure 4 gives the definition of score(∆), which incorporates the language model, phrase scores, and distortion penalty implied by the transition ∆.", "Figure 5 gives the definition of valid(T ).", "This function checks that the start and end points of each signature are in the set of allowed start and end points given in Lemma 2.", "Definition of valid(T ) A Bound on the Runtime of the Algorithm We now give a bound on the algorithm's run time.", "This will be the product of terms N and M , where N is an upper bound on the number of states in the dynamic program, and M is an upper bound on the number of outgoing transitions from any state.", "For any j ∈ {1 .", ".", ".", "n}, define first(j) to be the set of target-language words that can begin at position j and last(j) to be the set of target-language ∆ Resulting phrase sequence score(∆) φ p φ (s, e 1 , t, em)ŵ(p) σ i p φ (s(σ i ), ws(σ i ), t, em)ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| φ p σ i (s, e 1 , t(σ i ), wt(σ i ))ŵ(p) + λ(ws(σ i )|em) Figure 4 : Four operations that can extend a state T = (j, {σ 1 .", ".", ".", "σ r }) by a phrase p = (s, t, e 1 .", ".", ".", "e m ), and the scores incurred.", "We defineŵ(p) = κ(p) + m i=2 λ(e i (p)|e i−1 (p)).", "The functionŵ(p) includes the phrase translation model κ and the language model scores that can be computed using p alone.", "The weight η is the distortion penalty.", "+ η × |t + 1 − s(σ i )| σ i p σ i (s(σ i ), ws(σ i ), t(σ i ), wt(σ i ))ŵ(p) + λ(e 1 |wt(σ i )) + η × |t(σ i ) + 1 − s| +λ(ws(σ i )|em) + η × |t + 1 − s(σ i )| Function valid(T ) Input: In addition, define singles(j) to be the set of phrases that translate the single word at position j: singles(j) = {p : s(p) = j and t(p) = j} Next, define h to be the smallest integer such that for all j, |first(j)| ≤ h, |last(j)| ≤ h, and |singles(j)| ≤ h. Thus h is a measure of the maximal ambiguity of any word x j in the input.", "State T = j, {σ 1 .", ".", ".", "σr} for i = 1 .", ".", ".", "r if s(σ i ) < j − d + 2 and s(σ i ) = 1 return FALSE if t(σ i ) < j − d return FALSE return TRUE Finally, for any position j, define start(j) to be the set of phrases starting at position j: start(j) = {p : s(p) = j} and define l to be the smallest integer such that for all j, |start(j)| ≤ l. 
Given these definitions we can state the following result: Theorem 1.", "The time complexity of the algorithm is O(nd!lh d+1 ).", "To prove this we need the following definition: Definition 4 (p-structures).", "For any finite set A of integers with |A| = k, a p-structure is a set of r ordered pairs {(s i , t i )} r i=1 that satisfies the following properties: 1) 0 ≤ r ≤ k; 2) for each i ∈ {1 .", ".", ".", "r}, s i ∈ A and t i ∈ A (both s i = t i and s i = t i are allowed); 3) for each j ∈ A, there is at most one index i ∈ {1 .", ".", ".", "r} such that (s i = j) or (t i = j) or (s i = j and t i = j).", "We use g(k) to denote the number of unique pstructures for a set A with |A| = k. We then have the following Lemmas: Lemma 4.", "The function g(k) satisfies g(0) = 0, g(1) = 2, and the following recurrence for k ≥ 2: g(k) = 2g(k − 1) + 2(n − 1)g(k − 2) Proof.", "The proof is in Appendix A. Lemma 5.", "Consider the function h(k) = k 2 × g(k).", "h(k) is in O((k − 2)!).", "Proof.", "The proof is in Appendix B.", "We can now prove the theorem: Proof of Theorem 1: First consider the number of states in the dynamic program.", "Each state is of the form (j, {σ 1 .", ".", ".", "σ r }) where the set {(s(σ i ), t(σ i ))} r i=1 is a p-structure over the set {1}∪ {(j − d) .", ".", ".", "d}.", "The number of possible values for {(s(σ i ), e(σ i ))} r i=1 is at most g(d + 2).", "For a fixed choice of {(s(σ i ), t(σ i ))} r i=1 we will argue that there are at most h d+1 possible values for {(w s (σ i ), w t (σ i ))} r i=1 .", "This follows because for each k ∈ {(j − d) .", ".", ".", "j} there are at most h possible choices: if there is some i such that s(σ i ) = k, and t(σ i ) = k, then the associated word w s (σ i ) is in the set first(k); alternatively if there is some i such that t(σ i ) = k, and s(σ i ) = k, then the associated word w t (σ i ) is in the set last(k); alternatively if there is some i such that s(σ i ) = t(σ i ) = k then the associated words w s (σ i ), w t (σ i ) must be the first/last word of some phrase in singles(k); alternatively there is no i such that s(σ i ) = k or t(σ i ) = k, in which case there is no choice associated with position k in the sentence.", "Hence there are at most h choices associated with each position k ∈ {(j − d) .", ".", ".", "j}, giving h d+1 choices in total.", "Combining these results, and noting that there are n choices of the variable j, implies that there are at most ng(d + 2)h d+1 states in the dynamic program.", "Now consider the number of transitions from any state.", "A transition is of the form ψ 1 pψ 2 as defined in Section 4.2.1.", "For a given state there are at most (d + 2) choices for ψ 1 and ψ 2 , and l choices for p, giving at most (d + 2) 2 l choices in total.", "Multiplying the upper bounds on the number of states and number of transitions for each state gives an upper bound on the runtime of the algorithm as O(ng(d + 2)h d+1 (d + 2) 2 l).", "Hence by Lemma 5 the runtime is O(nd!lh d+1 ) time.", "The bound g(d + 2) over the number of possible values for {(s(σ i ), e(σ i ))} r i=1 is somewhat loose, as the set of p-structures over {1} ∪ {(j − d) .", ".", ".", "d} in- cludes impossible values {(s i , t i )} r i=1 where for example there is no i such that s(σ i ) = 1.", "However the bound is tight enough to give the O(d!)", "runtime.", "Discussion We conclude the paper with discussion of some issues.", "First we describe how the dynamic programming structures we have described can be used in conjunction with beam search.", "Second, we give 
more analysis of the complexity of the widely-used decoding algorithm of Koehn et al.", "(2003) .", "Beam Search Beam search is widely used in phrase-based decoding; it can also be applied to our dynamic programming construction.", "We can replace the line for each state T ∈ T j in the algorithm in Figure 2 with for each state T ∈ beam(T j ) where beam is a function that returns a subset of T j , most often the highest scoring elements of T j under some scoring criterion.", "A key question concerns the choice of scoring function γ(T ) used to rank states.", "One proposal is to define γ(T ) = α(T ) + β(T ) where α(T ) is the score used in the dynamic program, and β(T ) = i:ws(σ i ) =<s> λ u (w s (σ i )).", "Here λ u (w) is the score of word w under a unigram language model.", "The β(T ) scores allow different states in T j , which have different words w s (σ i ) at the start of signatures, to be comparable: for example it compensates for the case where w s (σ i ) is a rare word, which will incur a low probability when the bigram w w s (σ i ) for some word w is constructed during search.", "The β(T ) values play a similar role to \"future scores\" in the algorithm of Koehn et al.", "(2003) .", "However in the Koehn et al.", "(2003) algorithm, different items in the same beam can translate different subsets of the input sentence, making futurescore estimation more involved.", "In our case all items in T j translate all words x 1 .", ".", ".", "x j inclusive, which may make comparison of different hypotheses more straightforward.", "Complexity of Decoding with Bit-string Representations A common method for decoding phrase-based models, as described in Koehn et al.", "(2003) , is to use beam search in conjunction with a search algorithm that 1) creates the target language string in strictly left-to-right order; 2) uses a bit string with bits b i ∈ {0, 1} for i = 1 .", ".", ".", "n representing at each point whether word i in the input has been translated.", "A natural question is whether the number of possible bit strings for a model with a fixed distortion limit d can grow exponentially quickly with respect to the length of the input sentence.", "This section gives an example that shows that this is indeed the case.", "Assume that our sentence length n is such that (n − 2)/4 is an integer.", "Assume as before x 1 = <s> and x n = </s>.", "For each k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, assume we have the following phrases for the words x 4k+2 .", ".", ".", "x 4k+5 : (4k + 2, 4k + 2, u k ) (4k + 3, 4k + 3, v k ) (4k + 4, 4k + 4, w k ) (4k + 5, 4k + 5, z k ) (4k + 4, 4k + 5, y k ) Note that the only source of ambiguity is for each k whether we use y k to translate the entire phrase x 4k+4 x 4k+5 , or whether we use w k and z k to translate x 4k+4 and x 4k+5 separately.", "With a distortion limit d ≥ 5, the number of possible bit strings in this example is at least 2 (n−2)/4 .", "This follows because for any setting of the variables b 4k+4 ∈ {0, 1} for k ∈ {0 .", ".", ".", "((n − 2)/4 − 1)}, there is a valid derivation p 1 .", ".", ".", "p L such that the prefix p 1 .", ".", ".", "p l where l = 1 + (n − 2)/4 gives this bit string.", "Simply choose p 1 = (1, 1, <s>) and for l ∈ {0 .", ".", ".", "(n − 2)/4 − 1} choose p l +2 = (4l + 4, 4l + 5, y i ) if b 4k+4 = 1, p l +2 = (4l + 5, 4l + 5, z i ) otherwise.", "It can be verified that p 1 .", ".", ".", "p l is a valid prefix (there is a valid way to give a complete derivation from this prefix).", "As one example, for n = 10, and b 4 = 1 and b 8 = 0, 
a valid derivation is (1, 1, <s>) (4, 5, y_1) (9, 9, z_2) (7, 7, v_2) (3, 3, v_1) (2, 2, u_1) (6, 6, u_2) (8, 8, w_2) (10, 10, </s>). In this case the prefix (1, 1, <s>) (4, 5, y_1) (9, 9, z_2) gives b_4 = 1 and b_8 = 0.", "Other values for b_4 and b_8 can be given by using (5, 5, z_1) in place of (4, 5, y_1), and (8, 9, y_2) in place of (9, 9, z_2), with the following phrases modified appropriately.", "Conclusion We have given a polynomial-time dynamic programming algorithm for phrase-based decoding with a fixed distortion limit.", "The algorithm uses a quite different representation of states from previous decoding algorithms, is easily amenable to beam search, and leads to a new perspective on phrase-based decoding.", "Future work should investigate the effectiveness of the algorithm in practice.", "A Proof of Lemma 4 Without loss of generality assume A = {1, 2, 3, ..., k}.", "We have g(1) = 2, because in this case the valid p-structures are {(1, 1)} and ∅.", "To calculate g(k) we can sum over four possibilities: Case 1: There are g(k − 1) p-structures with s_i = t_i = 1 for some i ∈ {1 ... r}.", "This follows because once s_i = t_i = 1 for some i, there are g(k − 1) possible p-structures for the integers {2, 3, 4, ..., k}.", "Case 2: There are g(k − 1) p-structures such that s_i ≠ 1 and t_i ≠ 1 for all i ∈ {1 ... r}.", "This follows because once s_i ≠ 1 and t_i ≠ 1 for all i, there are g(k − 1) possible p-structures for the integers {2, 3, 4, ..., k}.", "Case 3: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 ... r} with s_i = 1 and t_i ≠ 1.", "This follows because for the i such that s_i = 1, there are (k − 1) choices for the value of t_i, and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 ... k} \ {1, t_i}.", "Case 4: There are (k − 1) × g(k − 2) p-structures such that there is some i ∈ {1 ... r} with t_i = 1 and s_i ≠ 1.", "This follows because for the i such that t_i = 1, there are (k − 1) choices for the value of s_i, and there are then g(k − 2) possible p-structures for the remaining integers in the set {1 ... k} \ {1, s_i}.", "Summing over these possibilities gives the following recurrence: g(k) = 2g(k − 1) + 2(k − 1)g(k − 2).", "B Proof of Lemma 5 Recall that h(k) = f(k) × g(k) where f(k) = k^2.", "Define k_0 to be the smallest integer such that for all k ≥ k_0, 2f(k)/f(k − 1) + (2f(k)/f(k − 2)) · ((k − 1)/(k − 3)) ≤ k − 2 (4).", "For f(k) = k^2 we have k_0 = 9.", "Now choose a constant c such that for all k ∈ {1 ... (k_0 − 1)}, h(k) ≤ c × (k − 2)!.", "We will prove by induction that under these definitions of k_0 and c we have h(k) ≤ c(k − 2)! for all integers k, hence h(k) is in O((k − 2)!).", "For values k ≥ k_0, we have h(k) = f(k)g(k) = 2f(k)g(k − 1) + 2f(k)(k − 1)g(k − 2) (5) = (2f(k)/f(k − 1))h(k − 1) + (2f(k)/f(k − 2))(k − 1)h(k − 2) ≤ (2cf(k)/f(k − 1) + (2cf(k)/f(k − 2)) · ((k − 1)/(k − 3)))(k − 3)! (6) ≤ c(k − 2)! (7).", "Eq. 5 follows from g(k) = 2g(k − 1) + 2(k − 1)g(k − 2).", "Eq. 6 follows by the inductive hypothesis that h(k − 1) ≤ c(k − 3)! and h(k − 2) ≤ c(k − 4)!.", "Eq. 7 follows because Eq. 4 holds for all k ≥ k_0." ] }
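The counting argument of Section 5.2 (at least 2^((n−2)/4) coverage bit-strings for the constructed phrase table) can also be verified mechanically. The following sketch is an illustration written alongside this record, not the paper's code; it only tracks which source positions the prefix covers under each choice of y_k versus z_k.

```python
# Illustrative check of the Section 5.2 construction: count the coverage
# bit-strings reachable after the prefix p_1 ... p_{1+(n-2)/4}, where for each
# block k either (4k+4, 4k+5, y_k) or (4k+5, 4k+5, z_k) is chosen.
from itertools import product

def reachable_bitstrings(n):
    assert (n - 2) % 4 == 0
    m = (n - 2) // 4
    bitstrings = set()
    for choices in product([False, True], repeat=m):
        covered = {1}  # p_1 = (1, 1, <s>)
        for k, use_y in enumerate(choices):
            if use_y:
                covered.update({4 * k + 4, 4 * k + 5})  # phrase y_k
            else:
                covered.add(4 * k + 5)                  # phrase z_k
        bitstrings.add(frozenset(covered))
    return bitstrings

for n in (6, 10, 14, 18):
    print(n, len(reachable_bitstrings(n)), 2 ** ((n - 2) // 4))
```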
{ "paper_header_number": [ "1", "3", "3.1", "3.2", "2.", "2.", "4", "4.1", "4.2", "3.", "4.2.1", "4.5", "5", "5.1", "5.2", "6" ], "paper_header_content": [ "Introduction", "Background: The Traveling Salesman Problem on Bandwidth-Limited Graphs", "Bandwidth-Limited TSPPs", "An Algorithm for Bandwidth-Limited TSPPs", "For any vertex", "For each path (connected component) in H j", "A Dynamic Programming Algorithm for", "Basic Definitions", "The Algorithm", "For all", "Definitions of δ(T ) and τ (T, ∆)", "A Bound on the Runtime of the Algorithm", "Discussion", "Beam Search", "Complexity of Decoding with Bit-string Representations", "Conclusion" ] }
GEM-SciDuet-train-81#paper-1211#slide-14
Future work
Finite state transducer (FST) formulation • An NMT system using this kind of approach? • Replace the attention model by absolving source words strictly
Finite state transducer (FST) formulation • An NMT system using this kind of approach? • Replace the attention model by absolving source words strictly
[]
GEM-SciDuet-train-82#paper-1212#slide-1
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates the encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
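To illustrate the kind of phoneme decoding analysis described in this abstract, the sketch below trains a simple linear probe on activation vectors labelled with phonemes. It is not the authors' implementation: the arrays are random placeholders standing in for time-aligned activation slices and their phoneme labels, and logistic regression is just one reasonable choice of probe.

```python
# Illustrative phoneme-decoding probe (not the paper's code): a linear
# classifier predicts phoneme identity from layer activations. The data here
# are random placeholders for aligned activation slices and phoneme labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.RandomState(0)
activations = rng.randn(2000, 512)        # placeholder activation vectors
labels = rng.randint(0, 40, size=2000)    # placeholder labels, ~40 phonemes

X_tr, X_te, y_tr, y_te = train_test_split(activations, labels, random_state=0)
probe = LogisticRegression(max_iter=1000)
probe.fit(X_tr, y_tr)
print("held-out phoneme decoding accuracy:", probe.score(X_te, y_te))
```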
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974).", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a).", "This model has two desirable properties: firstly, thanks to the analyses carried out in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropriate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: \sum_{u,i} \big( \sum_{u'} \max[0, α + d(u, i) − d(u', i)] + \sum_{i'} \max[0, α + d(u, i) − d(u, i')] \big) (1) where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014), and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc_u(u) = unit(Attn(RHN_{k,L}(Conv_{s,d,z}(u)))) (2) The first layer Conv_{s,d,z} is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN_{k,L} which consists of k residualized
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016), which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Table 1: Phoneme inventory. Vowels: i I U u e E @ Ä OI O o aI ae 2 A aU; Approximants: j ô l w; Nasals: m n N; Plosives: p b t d k g; Fricatives: f v T D s z S Z h; Affricates: Ù Ã", "Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012), provided by Chrupała et al. (2017a).", "The model was trained on the Synthetically Spoken COCO dataset, a version of the MS COCO dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, which in turn is based on Kaldi (Povey et al., 2011).", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t_r × D_r matrix, where t_r and D_r are the number of time steps and the dimensionality, respectively, for each representation r.
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO Speech model.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the COCO Speech model.", "We split this data into 2/3 training and 1/3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D_r-dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than the majority baseline (89% error rate) but poorly relative to the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013).", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013).", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the Euclidean distance between the vector representations of syllables i and j.", "Figure 2: Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class; the shaded area extends ±1 standard error from the mean.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
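The margin-based ranking objective in Eq. (1) of the Model section above can be written compactly over a batch of paired embeddings. The snippet below is an illustration only, not the authors' Theano implementation; the batch size, embedding dimensionality, and margin value are arbitrary assumptions.

```python
# Illustrative sketch of the margin-based ranking objective in Eq. (1):
# matching utterance/image pairs should be closer (in cosine distance)
# than mismatched pairs, by a margin alpha.
import numpy as np

def cosine_distance_matrix(U, I):
    """Pairwise cosine distances between rows of U (utterances) and I (images)."""
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    I = I / np.linalg.norm(I, axis=1, keepdims=True)
    return 1.0 - U @ I.T                                  # shape (n, n)

def contrastive_loss(U, I, alpha=0.2):
    D = cosine_distance_matrix(U, I)                      # D[j, k] = d(u_j, i_k)
    pos = np.diag(D)                                      # d(u, i) for matching pairs
    # hinge terms over mismatched utterances u' (rows) and mismatched images i' (columns)
    loss_u = np.maximum(0.0, alpha + pos[None, :] - D)    # compare each u' against pair k
    loss_i = np.maximum(0.0, alpha + pos[:, None] - D)    # compare each i' against pair j
    # do not penalise the matching pair itself
    np.fill_diagonal(loss_u, 0.0)
    np.fill_diagonal(loss_i, 0.0)
    return loss_u.sum() + loss_i.sum()

# toy usage with random 512-dimensional embeddings for a batch of 8 pairs
rng = np.random.default_rng(0)
print(contrastive_loss(rng.normal(size=(8, 512)), rng.normal(size=(8, 512))))
```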
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-1
Speech Model
Project to the joint semantic space Attention: weighted sum of last RHN layer units RHN RHN: Recurrent Highway Grounded speech perception MFCC
Project to the joint semantic space Attention: weighted sum of last RHN layer units RHN RHN: Recurrent Highway Grounded speech perception MFCC
[]
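The Phoneme-across-Context scoring described in the Experiments section above reduces to comparing two distances per (A, B, X) tuple: the tuple counts as correct when X lies closer to B than to A in the chosen representation. The sketch below is a hedged illustration; the embedding lookup and syllable identifiers are stand-ins, not the actual stimuli or representations.

```python
# Sketch of the PaC / ABX scoring: sign(dist(A, X) - dist(B, X)) > 0
# means the model has discriminated the phonemes correctly.
import numpy as np

def pac_accuracy(tuples, embed):
    """tuples: list of (A, B, X) syllable identifiers;
    embed: dict mapping a syllable to its vector (MFCC or layer activation)."""
    correct = 0
    for a, b, x in tuples:
        d_ax = np.linalg.norm(embed[a] - embed[x])        # Euclidean distance
        d_bx = np.linalg.norm(embed[b] - embed[x])
        correct += d_ax > d_bx                            # X closer to B than to A
    return correct / len(tuples)

# toy usage with made-up syllable vectors for the (be, me, my) example
rng = np.random.default_rng(0)
embed = {s: rng.normal(size=32) for s in ["bi", "mi", "maI"]}
print(pac_accuracy([("bi", "mi", "maI")], embed))
```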
GEM-SciDuet-train-82#paper-1212#slide-2
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to that proposed in linguistics.
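The hierarchical clustering mentioned in the abstract above can be sketched as Ward-linkage agglomerative clustering of phoneme-type vectors, with the tree cut into as many clusters as there are phoneme classes and the match scored with the adjusted Rand index, mirroring the analysis in the paper text. The input arrays below are placeholders, not the model's actual activations.

```python
# Sketch: Ward-linkage clustering of phoneme-type vectors compared against
# linguistic phoneme classes via the adjusted Rand index.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import adjusted_rand_score

def clustering_match(phoneme_vectors, phoneme_classes):
    """phoneme_vectors: (n_phonemes, D) matrix of averaged representations;
    phoneme_classes: length-n_phonemes list of class labels (vowel, nasal, ...)."""
    Z = linkage(phoneme_vectors, method="ward", metric="euclidean")
    n_classes = len(set(phoneme_classes))
    assignment = fcluster(Z, t=n_classes, criterion="maxclust")
    return adjusted_rand_score(phoneme_classes, assignment)

# toy usage: 38 fake phoneme vectors spread over 6 made-up classes
rng = np.random.default_rng(0)
vectors = rng.normal(size=(38, 64))
classes = list(rng.choice(["vowel", "approximant", "nasal", "plosive",
                           "fricative", "affricate"], size=38))
print(clustering_match(vectors, classes))
```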
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-2
Chrupala et al ACL2017
Representation of language in a model of visually grounded speech signal Using hidden layer activations in a set of auxiliary tasks Predicting utterance length and content, measuring representational similarity and disambiguation of homonyms Encodings of form and meaning emerge and evolve in hidden layers of stacked RNNs processing grounded speech
Representation of language in a model of visually grounded speech signal Using hidden layer activations in a set of auxiliary tasks Predicting utterance length and content, measuring representational similarity and disambiguation of homonyms Encodings of form and meaning emerge and evolve in hidden layers of stacked RNNs processing grounded speech
[]
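The "measuring representational similarity" step listed in the slide content above corresponds to correlating phoneme-distance matrices computed from MFCC vectors with those computed from a layer's activations, as in the Organization of phonemes experiment. The rough sketch below uses Euclidean distances and Spearman correlation as assumptions of this illustration, not necessarily the exact statistic used in the paper; the inputs are placeholders.

```python
# Sketch of a representational-similarity check: correlate pairwise phoneme
# distances in MFCC space with pairwise distances in activation space.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa(mfcc_vectors, activation_vectors):
    """Both arguments: (n_phonemes, D) matrices with rows aligned to the same phonemes."""
    d_mfcc = pdist(mfcc_vectors, metric="euclidean")      # condensed distance vectors
    d_act = pdist(activation_vectors, metric="euclidean")
    rho, p = spearmanr(d_mfcc, d_act)
    return rho, p

# toy usage: 38 phonemes, 13-d MFCC averages vs. 512-d activation averages
rng = np.random.default_rng(0)
print(rsa(rng.normal(size=(38, 13)), rng.normal(size=(38, 512))))
```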
GEM-SciDuet-train-82#paper-1212#slide-3
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to that proposed in linguistics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-3
Current Study
Questions: how is phonology encoded in MFCC features extracted from the speech signal? In the activations of the layers of the model? Data: Synthetically Spoken COCO dataset. Phoneme decoding and clustering.
Questions: how is phonology encoded in MFCC features extracted from the speech signal? In the activations of the layers of the model? Data: Synthetically Spoken COCO dataset. Phoneme decoding and clustering.
[]
GEM-SciDuet-train-82#paper-1212#slide-4
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to that proposed in linguistics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-4
Phoneme Decoding
Identifying phonemes from speech signal/activation patterns: supervised classification of aligned phonemes. Speech signal was aligned with phonemic transcription using the Gentle toolkit (based on Kaldi, Povey et al., 2011). MFCC features and activations are stored in a t_r × D_r matrix, where t_r and D_r are the number of time steps and the dimensionality, respectively, for each representation r. Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it. Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter. This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information. It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree. In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model. In Section 5.1 we quantify how easy it is to decode phoneme identity from activations. In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli. Section 5.3 shows how the phoneme inventory is organized in the activation space of the model. Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination. [Figure: phoneme decoding results by representation (MFCC, Conv, Rec1, Rec2, Rec3, Rec4, Rec5).]
Identifying phonemes from speech signal/activation patterns: supervised classification of aligned phonemes. Speech signal was aligned with phonemic transcription using the Gentle toolkit (based on Kaldi, Povey et al., 2011). MFCC features and activations are stored in a t_r × D_r matrix, where t_r and D_r are the number of time steps and the dimensionality, respectively, for each representation r. Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it. Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter. This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information. It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree. In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model. In Section 5.1 we quantify how easy it is to decode phoneme identity from activations. In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli. Section 5.3 shows how the phoneme inventory is organized in the activation space of the model. Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination. [Figure: phoneme decoding results by representation (MFCC, Conv, Rec1, Rec2, Rec3, Rec4, Rec5).]
[]
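The phoneme decoding setup summarized in the slide record above (time-averaged, force-aligned phoneme slices fed to an L2-penalized logistic regression) could look roughly like the sketch below. It is an illustration under assumed data structures, not the code released with the paper; the 2/3 train vs 1/3 heldout split follows the description in the paper content, while scikit-learn and the variable names are assumptions.

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def pool_phoneme(rep_matrix, start, end):
    # rep_matrix: (time_steps, dim) MFCC frames or layer activations of one
    # utterance; [start, end) is the frame span of one aligned phoneme token
    return rep_matrix[start:end].mean(axis=0)

def phoneme_decoding_error(X, y):
    # X: (n_tokens, dim) pooled phoneme vectors, y: phoneme identity labels
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=1/3, random_state=0)
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    clf.fit(X_train, y_train)
    return 1.0 - clf.score(X_test, y_test)  # heldout classification error rate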
GEM-SciDuet-train-82#paper-1212#slide-5
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-5
Phoneme Discrimination
ABX task (Schatz et al., 2013): discriminate minimal pairs; is X closer to A or to B? A, B and X are CV syllables. (A,B) and (B,X) are minimal pairs, but (A,X) are not. Table 3: Accuracy of choosing the correct target in an ABX task using different representations.
ABX task (Schatz et al., 2013): discriminate minimal pairs; is X closer to A or to B? A, B and X are CV syllables. (A,B) and (B,X) are minimal pairs, but (A,X) are not. Table 3: Accuracy of choosing the correct target in an ABX task using different representations.
[]
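The ABX decision rule behind the Phoneme Discrimination slide record above (compare Euclidean distances between syllable representations and count the trial as correct when X is closer to its minimal-pair partner B) can be illustrated with the short sketch below. Function and variable names are assumptions; the representations may be MFCC vectors or time-averaged layer activations, as described in the paper content.

import numpy as np

def abx_correct(vec_a, vec_b, vec_x):
    # True when X is closer to B than to A in the given representation space,
    # i.e. the shared/different phonemes are discriminated correctly
    return np.linalg.norm(vec_x - vec_b) < np.linalg.norm(vec_x - vec_a)

def abx_accuracy(triples):
    # triples: iterable of (vec_a, vec_b, vec_x) vectors for one representation
    results = [abx_correct(a, b, x) for a, b, x in triples]
    return float(np.mean(results))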
GEM-SciDuet-train-82#paper-1212#slide-6
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-6
Phoneme Discrimination by Class
The task is most challenging when the target (B) and distractor (A) belong to the same phoneme class. [Figure: ABX accuracy by phoneme class (affricate, approximant, fricative, nasal, plosive, vowel) across representations (mfcc, conv, rec1-rec5); Table 3: accuracy of choosing the correct target in an ABX task using different representations.]
The task is most challenging when the target (B) and distractor (A) belong to the same phoneme class. [Figure: ABX accuracy by phoneme class (affricate, approximant, fricative, nasal, plosive, vowel) across representations (mfcc, conv, rec1-rec5); Table 3: accuracy of choosing the correct target in an ABX task using different representations.]
[]
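As a hedged illustration of the ABX/Phoneme-across-Context scoring described in the record above: the paper measures sign(dist(A, X) − dist(B, X)) with Euclidean distances between syllable representation vectors, and counts a tuple as correct when X is closer to B than to A. The sketch below is not the authors' released code; the reps lookup, the syllable strings, and the abx_accuracy name are illustrative assumptions, and any averaged representation (MFCC or layer activations) can serve as the vectors.

# Illustrative sketch (assumed names, not from the paper's code): scoring an
# ABX / Phoneme-across-Context tuple set given precomputed syllable vectors.
# `reps` is assumed to map a syllable string to its time-averaged
# representation as a 1-D numpy array.
import numpy as np

def abx_accuracy(tuples, reps):
    """tuples: iterable of (A, B, X) syllable strings, where (A, B) and
    (B, X) are minimal pairs but (A, X) is not.
    Returns the fraction of tuples for which X is closer to B than to A,
    i.e. sign(dist(A, X) - dist(B, X)) > 0."""
    correct = 0
    total = 0
    for a, b, x in tuples:
        d_ax = np.linalg.norm(reps[a] - reps[x])  # Euclidean distance A-X
        d_bx = np.linalg.norm(reps[b] - reps[x])  # Euclidean distance B-X
        correct += d_ax > d_bx
        total += 1
    return correct / total

# Hypothetical usage with toy vectors:
# reps = {"bi": vec_bi, "mi": vec_mi, "maI": vec_maI}
# acc = abx_accuracy([("bi", "mi", "maI")], reps)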
GEM-SciDuet-train-82#paper-1212#slide-7
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-7
Organization of Phonemes
Agglomerative hierarchical clustering of phoneme activation vectors from the first hidden layer:
Agglomerative hierarchical clustering of phoneme activation vectors from the first hidden layer:
[]
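The clustering analysis summarized in the record above uses agglomerative hierarchical clustering of phoneme type vectors with Euclidean distance and Ward linkage, and scores a cut of the tree against the phoneme classes with the adjusted Rand index. The following is a minimal sketch of that pipeline under stated assumptions, not the authors' implementation: phoneme_vecs is assumed to be an (n_phonemes, dim) array of per-type averaged activations and phoneme_classes a list of class labels (vowel, plosive, ...) per phoneme; the function name cluster_and_score is illustrative.

# Illustrative sketch (assumed inputs): Ward-linkage clustering of phoneme
# type vectors and an adjusted-Rand comparison against phoneme classes.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import adjusted_rand_score

def cluster_and_score(phoneme_vecs, phoneme_classes):
    # Ward linkage in scipy uses Euclidean distances between observations.
    Z = linkage(phoneme_vecs, method="ward")
    # Cut the tree into as many clusters as there are phoneme classes.
    n_classes = len(set(phoneme_classes))
    assignments = fcluster(Z, t=n_classes, criterion="maxclust")
    return Z, adjusted_rand_score(phoneme_classes, assignments)

# Z can be passed to scipy.cluster.hierarchy.dendrogram to draw a tree
# analogous to the clustering referenced in the slide above.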
GEM-SciDuet-train-82#paper-1212#slide-8
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-8
Synonym Discrimination
Distinguishing between synonym pairs in the same context: A girl looking at a photo / A girl looking at a picture. Synonyms were selected using WordNet synsets: The pair have the same POS tag and are interchangeable; The pair clearly differ in form (not donut/doughnut); The more frequent token in a pair constitutes less than 95% of the occurrences. [Figure: synonym discrimination error rates per representation (mfcc, conv, rec1-rec5, emb) and synonym pair.]
Distinguishing between synonym pairs in the same context: A girl looking at a photo / A girl looking at a picture. Synonyms were selected using WordNet synsets: The pair have the same POS tag and are interchangeable; The pair clearly differ in form (not donut/doughnut); The more frequent token in a pair constitutes less than 95% of the occurrences. [Figure: synonym discrimination error rates per representation (mfcc, conv, rec1-rec5, emb) and synonym pair.]

[]
GEM-SciDuet-train-82#paper-1212#slide-9
1212
Encoding of phonology in a recurrent neural model of grounded speech
We study the representation and encoding of phonemes in a recurrent neural network model of grounded speech. We use a model which processes images and their spoken descriptions, and projects the visual and auditory representations into the same semantic space. We perform a number of analyses on how information about individual phonemes is encoded in the MFCC features extracted from the speech signal, and the activations of the layers of the model. Via experiments with phoneme decoding and phoneme discrimination we show that phoneme representations are most salient in the lower layers of the model, where low-level signals are processed at a fine-grained level, although a large amount of phonological information is retained at the top recurrent layer. We further find that the attention mechanism following the top recurrent layer significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy. Moreover, a hierarchical clustering of phoneme representations learned by the network shows an organizational structure of phonemes similar to those proposed in linguistics.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198 ], "paper_content_text": [ "Introduction Spoken language is a universal human means of communication.", "As such, its acquisition and representation in the brain is an essential topic in the study of the cognition of our species.", "In the field of neuroscience there has been a long-standing interest in the understanding of neural representations of linguistic input in human brains, most commonly via the analysis of neuro-imaging data of participants exposed to simplified, highly controlled inputs.", "More recently, naturalistic data has been used and patterns in the brain have been correlated with patterns in the input (e.g.", "Wehbe et al., 2014; Khalighinejad et al., 2017) .", "This type of approach is relevant also when the goal is the understanding of the dynamics in complex neural network models of speech understanding.", "Firstly because similar techniques are often applicable, but more importantly because the knowledge of how the workings of artificial and biological neural networks are similar or different is valuable for the general enterprise of cognitive science.", "Recent studies have implemented models which learn to understand speech in a weakly and indirectly supervised fashion from correlated audio and visual signal: Harwath et al.", "(2016) ; Harwath and Glass (2017); Chrupała et al.", "(2017a) .", "This is a departure from typical Automatic Speech Recognition (ASR) systems which rely on large amounts of transcribed speech, and these recent models come closer to the way humans acquire language in a grounded setting.", "It is thus especially interesting to investigate to what extent the traditional levels of linguistic analysis such as phonology, morphology, syntax and semantics are encoded in the activations of the hidden layers of these models.", "There are a small number of studies which focus on the syntax and/or semantics in the context of neural models of written language (e.g.", "Elman, 1991; Frank et al., 2013; Kádár et al., 2016; Li et al., 2016a; Adi et al., 2016; Li et al., 2016b; Linzen et al., 2016) .", "Taking it a step further, Gelderloos and Chrupała (2016) and Chrupała et al.", "(2017a) investigate the levels of representations in models which learn language from phonetic transcriptions and from the speech signal, respectively.", "Neither of these tackles the representation of phonology in any great depth.", "Instead they work with relatively coarse-grained distinctions between form and meaning.", "In the current work we use controlled synthetic stimuli, as well as alignment between the audio signal and phonetic transcription of spoken utterances to extract phoneme 
representation vectors based on the activations on the hidden layers of a model of grounded speech perception.", "We use these representations to carry out analyses of the representation of phonemes at a fine-grained level.", "In a series of experiments, we show that the lower layers of the model encode accurate representations of the phonemes which can be used in phoneme identification and classification with high accuracy.", "We further investigate how the phoneme inventory is organised in the activation space of the model.", "Finally, we tackle the general issue of the representation of phonological form versus meaning with a controlled task of synonym discrimination.", "Our results show that the bottom layers in the multi-layer recurrent neural network learn invariances which enable it to encode phonemes independently of co-articulatory context, and that they represent phonemic categories closely matching usual classifications from linguistics.", "Phonological form becomes harder to detect in higher layers of the network, which increasingly focus on representing meaning over form, but encoding of phonology persists to a significant degree up to the top recurrent layer.", "We make the data and open-source code to reproduce our results publicly available at github.com/gchrupala/encoding-of-phonology.", "Related Work Research on encoding of phonology has been carried out from a psycholinguistics as well as computational modeling perspectives.", "Below we review both types of work.", "Phoneme perception Co-articulation and interspeaker variability make it impossible to define unique acoustic patterns for each phoneme.", "In an early experiment, Liberman et al.", "(1967) analyzed the acoustic properties of the /d/ sound in the two syllables /di/ and /du/.", "They found that while humans easily noticed differences between the two instances when /d/ was played in isolation, they perceived the /d/ as be-ing the same when listening to the complete syllables.", "This phenomenon is often referred to as categorical perception: acoustically different stimuli are perceived as the same.", "In another experiment Lisker and Abramson (1967) used the two syllables /ba/ and /pa/ which only differ in their voice onset time (VOT), and created a continuum moving from syllables with short VOT to syllables with increasingly longer VOT.", "Participants identified all consonants with VOT below 25 msec as being /b/ and all consonant with VOT above 25 msec as being /p/.", "There was no grey area in which both interpretations of the sound were equally likely, which suggests that the phonemes were perceived categorically.", "Supporting findings also come from discrimination experiments: when one consonant has a VOT below 25 msec and the other above, people perceive the two syllables as being different (/ba/ and /pa/ respectively), but they do not notice any differences in the acoustic signal when both syllables have a VOT below or above 25 msec (even when these sounds are physically further away from each other than two sounds that cross the 25 msec dividing line).", "Evidence from infant speech perception studies suggests that infants also perceive phonemes categorically (Eimas et al., 1971) : one-and fourmonth old infants were presented with multiple syllables from the continuum of /ba/ to /pa/ sounds described above.", "As long as the syllables all came from above or below the 25 msec line, the infants showed no change in behavior (measured by their amount of sucking), but when presented with a syllable crossing 
that line, the infants reacted differently.", "This suggests that infants, just like adults, perceive speech sounds as belonging to discrete categories.", "Dehaene-Lambertz and Gliga (2004) also showed that the same neural systems are activated for both infants and adults when performing this task.", "Importantly, languages differ in their phoneme inventories; for example English distinguishes /r/ from /l/ while Japanese does not, and children have to learn which categories to use.", "Experimental evidence suggests that infants can discriminate both native and nonnative speech sound differences up to 8 months of age, but have difficulty discriminating acoustically similar nonnative contrasts by 10-12 months of age (Werker and Hensch, 2015) .", "These findings suggest that by their first birthday, they have learned to focus only on those contrasts that are relevant for their native language and to neglect those which are not.", "Psycholinguistic theories assume that children learn the categories of their native language by keeping track of the frequency distribution of acoustic sounds in their input.", "The forms around peaks in this distribution are then perceived as being a distinct category.", "Recent computational models showed that infant-directed speech contains sufficiently clear peaks for such a distributional learning mechanism to succeed and also that top-down processes like semantic knowledge and visual information play a role in phonetic category learning (ter Schure et al., 2016) .", "From the machine learning perspective categorical perception corresponds to the notion of learning invariances to certain properties of the input.", "With the experiments in Section 4 we attempt to gain some insight into this issue.", "Computational models There is a sizeable body of work on using recurrent neural (and other) networks to detect phonemes or phonetic features as a subcomponent of an ASR system.", "King and Taylor (2000) train recurrent neural networks to extract phonological features from framewise cepstral representation of speech in the TIMIT speaker-independent database.", "Frankel et al.", "(2007) introduce a dynamic Bayesian network for articulatory (phonetic) feature recognition as a component of an ASR system.", "Siniscalchi et al.", "(2013) show that a multilayer perceptron can successfully classify phonological features and contribute to the accuracy of a downstream ASR system.", "Mohamed et al.", "(2012) use a Deep Belief Network (DBN) for acoustic modeling and phone recognition on human speech.", "They analyze the impact of the number of layers on phone recognition error rate, and visualize the MFCC vectors as well as the learned activation vectors of the hidden layers of the model.", "They show that the representations learned by the model are more speakerinvariant than the MFCC features.", "These works directly supervise the networks to recognize phonological information.", "Another supervised but multimodal approach is taken by Sun (2016) , which uses grounded speech for improving a supervised model of transcribing utterances from spoken description of images.", "We on the other hand are more interested in understand-ing how the phonological level of representation emerges from weak supervision via correlated signal from the visual modality.", "There are some existing models which learn language representations from sensory input in such a weakly supervised fashion.", "For example Roy and Pentland (2002) use spoken utterances paired with images of objects, and search 
for segments of speech that reliably co-occur with visual shapes.", "Yu and Ballard (2004) use a similar approach but also include non-verbal cues such as gaze and gesture into the input for unsupervised learning of words and their visual meaning.", "These language learning models use rich input signals, but are very limited in scale and variation.", "A separate line of research has used neural networks for modeling phonology from a (neuro)cognitive perspective.", "Burgess and Hitch (1999) implement a connectionist model of the so-called phonological loop, i.e.", "the posited working memory which makes phonological forms available for recall (Baddeley and Hitch, 1974) .", "Gasser and Lee (1989) show that Simple Recurrent Networks are capable of acquiring phonological constraints such as vowel harmony or phonological alterations at morpheme boundaries.", "Touretzky and Wheeler (1989) present a connectionist architecture which performs multiple simultaneous insertion, deletion, and mutation operations on sequences of phonemes.", "In this body of work the input to the network is at the level of phonemes or phonetic features, not acoustic features, and it is thus more concerned with the rules governing phonology and does not address how representations of phonemes arise from exposure to speech in the first place.", "Moreover, the early connectionist work deals with constrained, toy datasets.", "Current neural network architectures and hardware enable us to use much more realistic inputs with the potential to lead to qualitatively different results.", "Model As our model of language acquisition from grounded speech signal we adopt the Recurrent Highway Network-based model of Chrupała et al.", "(2017a) .", "This model has two desirable properties: firstly, thanks to the analyses carried in that work, we understand roughly how the hidden layers differ in terms of the level of linguistic representation they encode.", "Secondly, the model is trained on clean synthetic speech which makes it appropri-ate to use for the controlled experiments in Section 5.2.", "We refer the reader to Chrupała et al.", "(2017a) for a detailed description of the model architecture.", "Here we give a brief overview.", "The model exploits correlations between two modalities, i.e.", "speech and vision, as a source of weak supervision for learning to understand speech; in other words it implements language acquisition from the speech signal grounded in visual perception.", "The architecture is a bi-modal network whose learning objective is to project spoken utterances and images to a joint semantic space, such that corresponding pairs (u, i) (i.e.", "an utterance and the image it describes) are close in this space, while unrelated pairs are far away, by a margin α: (1) u,i u max[0, α + d(u, i) − d(u , i)] + i max[0, α + d(u, i) − d(u, i )] where d(u, i) is the cosine distance between the encoded utterance u and encoded image i.", "The image encoder part of the model uses image vectors from a pretrained object classification model, VGG-16 (Simonyan and Zisserman, 2014) , and uses a linear transform to directly project these to the joint space.", "The utterance encoder takes Mel-frequency Cepstral Coefficients (MFCC) as input, and transforms it successively according to: enc u (u) = unit(Attn(RHN k,L (Conv s,d,z (u)))) (2) The first layer Conv s,d,z is a one-dimensional convolution of size s which subsamples the input with stride z, and projects it to d dimensions.", "It is followed by RHN k,L which consists of k residualized 
recurrent layers.", "Specifically these are Recurrent Highway Network layers (Zilly et al., 2016) , which are closely related to GRU networks, with the crucial difference that they increase the depth of the transform between timesteps; this is the recurrence depth L. The output of the final recurrent layer is passed through an attention-like lookback operator Attn which takes a weighted average of the activations across time steps.", "Finally, both utterance and image projections are L2-normalized.", "See Section 4.1 for details of the model configuration.", "Vowels i I U u e E @ Ä OI O o aI ae 2 A aU Approximants j ô l w Nasals m n N Plosives p b t d k g Fricatives f v T D s z S Z h Affricates Ù Ã Experimental data and setup The phoneme representations in each layer are calculated as the activations averaged over the duration of the phoneme occurrence in the input.", "The average input vectors are similarly calculated as the MFCC vectors averaged over the time course of the articulation of the phoneme occurrence.", "When we need to represent a phoneme type we do so by averaging the vectors of all its occurrences in the validation set.", "Table 1 shows the phoneme inventory we work with; this is also the inventory used by Gentle/Kaldi (see Section 4.3).", "Model settings We use the pre-trained version of the COCO Speech model, implemented in Theano (Bastien et al., 2012) , provided by Chrupała et al.", "dataset (Lin et al., 2014) where speech was synthesized for the original image descriptions, using high-quality speech synthesis provided by gTTS.", "2 Forced alignment We aligned the speech signal to the corresponding phonemic transcription with the Gentle toolkit, 3 which in turn is based on Kaldi (Povey et al., 2011) .", "It uses a speech recognition model for English to transcribe the input audio signal, and then finds the optimal alignment of the transcription to the signal.", "This fails for a small number of utterances, which we remove from the data.", "In the next step we extract MFCC features from the audio signal and pass them through the COCO Speech utterance encoder, and record the activations for the convolutional layer as well as all the recurrent layers.", "For each utterance the representations (i.e.", "MFCC features and activations) are stored in a t r × D r matrix, where t r and D r are the number of times steps and the dimensionality, respectively, for each representation r. 
Given the alignment of each phoneme token to the underlying audio, we then infer the slice of the representation matrix corresponding to it.", "Experiments In this section we report on four experiments which we designed to elucidate to what extent information about phonology is represented in the activations of the layers of the COCO Speech model.", "In Section 5.1 we quantify how easy it is to decode phoneme identity from activations.", "In Section 5.2 we determine phoneme discriminability in a controlled task with minimal pair stimuli.", "Section 5.3 shows how the phoneme inventory is organized in the activation space of the model.", "Finally, in Section 5.4 we tackle the general issue of the representation of phonological form versus meaning with the controlled task of synonym discrimination.", "Phoneme decoding In this section we quantify to what extent phoneme identity can be decoded from the input MFCC features as compared to the representations extracted from the COCO speech.", "As explained in Section 4.3, we use phonemic transcriptions aligned to the corresponding audio in order to segment the signal into chunks corresponding to individual phonemes.", "We take a sample of 5000 utterances from the validation set of Synthetically Spoken COCO, and extract the force-aligned representations from the Speech COCO model.", "We split this data into 2 3 training and 1 3 heldout portions, and use supervised classification in order to quantify the recoverability of phoneme identities from the representations.", "Each phoneme slice is averaged over time, so that it becomes a D r -dimensional vector.", "For each representation we then train L2-penalized logistic regression (with the fixed penalty weight 1.0) on the training data and measure classification error rate on the heldout portion.", "Figure 1 shows the results.", "As can be seen from this plot, phoneme recoverability is poor for the representations based on MFCC and the convolutional layer activations, but improves markedly for the recurrent layers.", "Phonemes are easiest recovered from the activations at recurrent layers 1 and 2, and the accuracy decreases thereafter.", "This suggests that the bottom recurrent layers of the model specialize in recognizing this type of low-level phonological information.", "It is notable however that even the last recurrent layer encodes phoneme identity to a substantial degree.", "The MFCC features do much better than majority baseline (89% error rate) but poorly reltive to the the recurrent layers.", "Averaging across phoneme durations may be hurting performance, but interestingly, the network can overcome this and form more robust phoneme representations in the activation patterns.", "data.", "They propose a set of tasks called Minimal-Pair ABX tasks that allow to make linguistically precise comparisons between syllable pairs that only differ by one phoneme.", "They use variants of this task to study phoneme discrimination across talkers and phonetic contexts as well as talker discrimination across phonemes.", "Phoneme discrimination Here we evaluate the COCO Speech model on the Phoneme across Context (PaC) task of Schatz et al.", "(2013) .", "This task consists of presenting a series of equal-length tuples (A, B, X) to the model, where A and B differ by one phoneme (either a vowel or a consonant), as do B and X, but A and X are not minimal pairs.", "For example, in the tuple (be /bi/, me /mi/, my /maI/), the task is to identify which of the two syllables /bi/ or /mi/ is closer to /maI/.", "The 
goal is to measure context invariance in phoneme discrimination by evaluating how often the model recognizes X as the syllable closer to B than to A.", "We used a list of all attested consonant-vowel (CV) syllables of American English according to the syllabification method described in Gorman (2013) .", "We excluded the ones which could not be unambiguously represented using English spelling for input to the TTS system (e.g.", "/baU/).", "We then compiled a list of all possible (A, B, X) tuples from this list where (A, B) and (B, X) are minimal pairs, but (A, X) are not.", "This resulted in 34,288 tuples in total.", "For each tuple, we measure sign(dist(A, X) − dist(B, X)), where dist(i, j) is the euclidean distance between the vector rep- Figure 2 : Accuracies for the ABX CV task for the cases where the target and the distractor belong to the same phoneme class.", "Shaded area extends ±1 standard error from the mean.", "resentations of syllables i and j.", "These representations are either the audio feature vectors or the layer activation vectors.", "A positive value for a tuple means that the model has correctly discriminated the phonemes that are shared or different across the syllables.", "Table 3 shows the discrimination accuracy in this task using various representations.", "The pattern is similar to what we observed in the phoneme identification task: best accuracy is achieved using representation vectors from recurrent layers 1 and 2, and it drops as we move further up in the model.", "The accuracy is lowest when final embedding features are used for this task.", "However, the PaC task is most meaningful and challenging where the target and the distractor phonemes belong to the same phoneme class.", "Figure 2 shows the accuracies for this subset of cases, broken down by class.", "As can be seen, the model can discriminate between phonemes with high accuracy across all the layers, and the layer activations are more informative for this task than the MFCC features.", "Again, most phoneme classes seem to be represented more accurately in the lower layers (1-3), and the performance of the model in this task drops as we move towards higher hidden layers.", "There are also clear differences in the pattern of discriminability for the phoneme classes.", "The vowels are especially easy to tell apart, but accuracy on vowels drops most acutely in the higher layers.", "Meanwhile the accuracy on fricatives and approximants starts low, but improves rapidly and peaks around recurrent layer 2.", "The somewhat erratic pattern for nasals and affricates is most likely due to small sample size for these classes, as evident from the wide standard error.", "Organization of phonemes In this section we take a closer look at the underlying organization of phonemes in the model.", "Our experiment is inspired by Khalighinejad et al.", "(2017) who study how the speech signal is represented in the brain at different stages of the auditory pathway by collecting and analyzing electroencephalography responses from participants listening to continuous speech, and show that brain responses to different phoneme categories turn out to be organized by phonetic features.", "We carry out an analogous experiment by analyzing the hidden layer activations of our model in response to each phoneme in the input.", "First, we generated a distance matrix for every pair of phonemes by calculating the Euclidean distance between the phoneme pair's activation vectors for each layer separately, as well as a distance matrix for all 
phoneme pairs based on their MFCC features.", "Similar to what Khalighinejad et al.", "(2017) report, we observe that the phoneme activations on all layers significantly correlate with the phoneme representations in the speech signal, and these correlations are strongest for the lower layers of the model.", "Figure 3 shows the results.", "We then performed agglomerative hierarchical clustering on phoneme type MFCC and activation vectors, using Euclidean distance as the distance metric and the Ward linkage criterion (Ward Jr, 1963) .", "Figure 5 shows the clustering results for the activation vectors on the first hidden layer.", "The leaf nodes are color-coded according to phoneme classes as specified in Table 1 .", "There is substantial degree of matching between the classes and the structure of the hierarchy, but also some mixing between rounded back vowels and voiced plosives /b/ and /g/, which share articulatory features such as lip movement or tongue position.", "We measured the adjusted Rand Index for the match between the hierarchy induced from each representation against phoneme classes, which were obtained by cutting the tree to divide the cluster into the same number of classes as there are phoneme classes.", "There is a notable drop between the match from MFCC to the activation of the convolutional layer.", "We suspect this may be explained by the loss of information caused by averaging over phoneme instances combined with the lower temporal resolution of the activations compared to MFCC.", "The match improves markedly at recurrent layer 1.", "Synonym discrimination Next we simulate the task of distinguishing between pairs of synonyms, i.e.", "words with different acoustic forms but the same meaning.", "With a representation encoding phonological form, our expectation is that the task would be easy; in contrast, with a representation which is invariant to phonological form in order to encode meaning, the task would be hard.", "We generate a list of synonyms for each noun, verb and adjective in the validation data using Wordnet (Miller, 1995) synset membership as a criterion.", "Out of these generated word pairs, we select synonyms for the experiment based on the following criteria: • both forms clearly are synonyms in the sense that one word can be replaced by the other without changing the meaning of a sentence, • both forms appear more than 20 times in the validation data, • the words differ clearly in form (i.e.", "they are not simply variant spellings like donut/doughnut, grey/gray), • the more frequent form constitutes less than 95% of the occurrences.", "This gives us 2 verb, 2 adjective and 21 noun pairs.", "For each synonym pair, we select the sentences in the validation set in which one of the two forms appears.", "We use the POS-tagging feature of NLTK (Bird, 2006) to ensure that only those sentences are selected in which the word appears in the correct word category (e.g.", "play and show are synonyms when used as nouns, but not when used as verbs).", "We then generate spoken utterances in which the original word is replaced by its synonym, resulting in the same amount of utterances for both words of each synonym pair.", "For each pair we generate a binary classification task using the MFCC features, the average activations in the convolutional layer, the average unit activations per recurrent layer, and the sentence embeddings as input features.", "For every type of input, we run 10-fold cross validation using Logistic Regression to predict which of the two words the 
utterance contains.", "We used an average of 672 (minimum 96; maximum 2282) utterances for training the classifiers.", "Figure 6 shows the error rate in this classification task for each layer and each synonym pair.", "Recurrent layer activations are more informative for this task than MFCC features or activations of the convolutional layer.", "Across all the recurrent layers the error rate is small, showing that some form of phonological information is present throughout this part of the model.", "However, sentence embeddings give relatively high error rates suggesting that the attention layer acts to focus on semantic information and to filter out much of phonological form.", "Discussion Understanding distributed representations learned by neural networks is important but has the reputation of being hard or even impossible.", "In this work we focus on making progress on this problem for a particular domain: representations of phonology in a multilayer recurrent neural network trained on grounded speech signal.", "We believe it is important to carry out multiple analyses using diverse methodology: any single experiment may be misleading as it depends on analytical choices such as the type of supervised model used for decoding, the algorithm used for clustering, or the similarity metric for representational similarity analysis.", "To the extent that more than one experiment points to the same conclusion our confidence in the reliability of the insights gained will be increased.", "Earlier work (Chrupała et al., 2017a) shows that encoding of semantics in our RNN model of grounded speech becomes stronger in higher layers, while encoding of form becomes weaker.", "The main high-level results of our study confirm this pattern by showing that the representation of phonological knowledge is most accurate in the lower layers of the model.", "This general pattern is to be expected as the objective of the utterance encoder is to transform the input acoustic features in such a way that it can be matched to its counterpart in a completely separate modality.", "Many of the details of how this happens, however, are far from obvious: perhaps most surprisingly we found that a large amount of phonological information is still available up to the top recurrent layer.", "Evidence for this pattern emerges from the phoneme decoding task, the ABX task and the synonym discrimination task.", "The last one also shows that the attention layer filters out and significantly attenuates encoding of phonology and makes the utterance embeddings much more invariant to synonymy.", "Our model is trained on synthetic speech, which is easier to process than natural human-generated speech.", "While small-scale databases of natural speech and image are available (e.g.", "the Flickr8k Audio Caption Corpus, Harwath and Glass, 2015) , they are not large enough to reliably train models such as ours.", "In future we would like to collect more data and apply our methodology to grounded human speech and investigate whether context and speaker-invariant phoneme representations can be learned from natural, noisy input.", "We would also like to make comparisons to the results that emerge from similar analyses applied to neuroimaging data." ] }
{ "paper_header_number": [ "1", "2", "2.1", "2.2", "3", "4", "4.1", "4.3", "5", "5.1", "5.2", "5.3", "5.4", "6" ], "paper_header_content": [ "Introduction", "Related Work", "Phoneme perception", "Computational models", "Model", "Experimental data and setup", "Model settings", "Forced alignment", "Experiments", "Phoneme decoding", "Phoneme discrimination", "Organization of phonemes", "Synonym discrimination", "Discussion" ] }
GEM-SciDuet-train-82#paper-1212#slide-9
Conclusion
Phoneme representations are most salient in lower layers Large amount of phonological information persists up to the top recurrent layer The attention layer filters out and significantly attenuates encoding of phonology and makes utterance embeddings more invariant to synonymy
Phoneme representations are most salient in lower layers Large amount of phonological information persists up to the top recurrent layer The attention layer filters out and significantly attenuates encoding of phonology and makes utterance embeddings more invariant to synonymy
[]
GEM-SciDuet-train-83#paper-1214#slide-0
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
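To make the problem definition and the generic encoder-decoder factorisation in Eqs. (1)-(3) concrete, the following PyTorch sketch splits one (text, keyphrases) sample into the M_i training pairs described above, tags each target as present or absent via the contiguous-subsequence test, and wires up a bare GRU encoder-decoder. The layer sizes mirror the paper's implementation details (150-dim embeddings, 300-dim hidden states, 50k vocabulary), but the use of the final encoder state as the context q(.), the greedy wiring, and the omission of attention and copying are simplifying assumptions, not the authors' implementation.

```python
# Sketch only: text-to-keyphrase pair construction plus a bare encoder-decoder
# skeleton in the spirit of Eqs. (1)-(3). Attention and the copying mechanism,
# which the paper adds on top, are deliberately left out here.
import torch
import torch.nn as nn

def to_pairs(source_tokens, keyphrases):
    """Split one (x, {p^(1..M)}) sample into M (x, p) pairs, marking each target
    as present/absent via a contiguous-subsequence match against the source."""
    def present(p):
        n = len(p)
        return any(source_tokens[i:i + n] == p
                   for i in range(len(source_tokens) - n + 1))
    return [(source_tokens, p, present(p)) for p in keyphrases]

class Seq2Seq(nn.Module):
    def __init__(self, vocab, emb=150, hid=300):
        super().__init__()
        self.emb = nn.Embedding(vocab, emb)
        self.enc = nn.GRU(emb, hid, batch_first=True)   # h_t = f(x_t, h_{t-1})
        self.dec = nn.GRU(emb, hid, batch_first=True)   # s_t = f(y_{t-1}, s_{t-1}, c)
        self.out = nn.Linear(hid, vocab)                # g(.) -> scores over the vocabulary

    def forward(self, x, y_in):
        _, c = self.enc(self.emb(x))        # c = q(h_1..h_T): here simply the final state
        s, _ = self.dec(self.emb(y_in), c)  # condition the decoder on c
        return self.out(s)                  # logits for p(y_t | y_<t, x)

pairs = to_pairs(["keyphrase", "generation", "with", "rnn"],
                 [["keyphrase", "generation"], ["text", "mining"]])
print(pairs[0][2], pairs[1][2])             # True (present), False (absent)

model = Seq2Seq(vocab=50000)
logits = model(torch.randint(0, 50000, (2, 12)), torch.randint(0, 50000, (2, 4)))
print(logits.shape)                         # (2, 4, 50000)
```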
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
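For a single decoding step, the attention weights of Eq. (4) and the copy scores of Eqs. (5)-(6) reduce to a few matrix products. The numpy sketch below combines a dot-product alignment, the copy score psi_c(x_j) = sigma(h_j^T W_c) s_t, and a joint renormalisation into one distribution over vocabulary words plus source-text words. The dot-product alignment, tanh as sigma, and the assumption that every source word is in-vocabulary are simplifications for illustration; this is not the exact CopyRNN parameterisation.

```python
# Sketch of one decoding step: attention context (Eq. 4), copy scores (Eq. 6),
# and the combined generate-or-copy distribution (Eq. 5). Shapes are toy-sized.
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attend(s_prev, H):
    """Eq. (4) with a dot-product alignment: context vector c_i and weights alpha."""
    alpha = softmax(H @ s_prev)             # one weight per encoder state h_j
    return alpha @ H, alpha                 # c_i = sum_j alpha_ij h_j

def copy_scores(H, s_t, W_c):
    """Eq. (6): psi_c(x_j) = sigma(h_j^T W_c) s_t, one raw score per source position."""
    return np.tanh(H @ W_c) @ s_t

rng = np.random.default_rng(1)
T, d, vocab = 5, 8, 20
H = rng.normal(size=(T, d))                 # encoder states h_1..h_T
s = rng.normal(size=d)                      # current decoder state s_t
W_c = rng.normal(size=(d, d))
source = [3, 7, 7, 11, 2]                   # word ids of the source text (all in-vocabulary here)

c, alpha = attend(s, H)                     # context vector for this step
W_g = rng.normal(size=(vocab, 2 * d))
gen = W_g @ np.concatenate([s, c])          # stand-in for the generation logits behind p_g
copy = copy_scores(H, s, W_c)               # one copy logit per source position

# Normalise generation and copy scores jointly, then add the copy mass of each source
# position onto the word id it points at, giving p(y_t) = p_g(y_t) + p_c(y_t) (Eq. 5).
probs = softmax(np.concatenate([gen, copy]))
p = probs[:vocab].copy()
for j, w in enumerate(source):
    p[w] += probs[vocab + j]
print(p.sum())                              # ~1.0: a single distribution over word types
```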
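The evaluation metric section can likewise be read as a short procedure: stem predicted and gold phrases with the Porter stemmer, count exact matches among the top-k predictions, and macro-average precision, recall and F1 over documents. The sketch below implements that reading for one document; NLTK's PorterStemmer and the standard recall denominator (the number of gold keyphrases) are assumptions, since the authors' evaluation script is not reproduced here.

```python
# Sketch of precision/recall/F1 at k for one document, with Porter stemming
# deciding whether a predicted phrase matches a gold keyphrase.
from nltk.stem.porter import PorterStemmer

_stem = PorterStemmer().stem

def norm(phrase):
    return " ".join(_stem(w) for w in phrase.lower().split())

def prf_at_k(predicted, gold, k=5):
    pred = [norm(p) for p in predicted[:k]]
    gold_set = {norm(g) for g in gold}
    correct = sum(p in gold_set for p in pred)
    precision = correct / max(len(pred), 1)
    recall = correct / max(len(gold_set), 1)
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(prf_at_k(["keyphrase generation", "recurrent neural networks", "topic models"],
               ["keyphrase generation", "copying mechanism", "recurrent neural network"]))
# Stemming lets "recurrent neural networks" match "recurrent neural network".
```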
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-0
Introduction Keyphrase
o Short texts highly summarize the significant content of a document o Knowledge mining (concept) o Information retrieval (indexing term) o Provided by authors/editors This work aims to o obtain keyphrases from scientific papers
o Short texts highly summarize the significant content of a document o Knowledge mining (concept) o Information retrieval (indexing term) o Provided by authors/editors This work aims to o obtain keyphrases from scientific papers
[]
GEM-SciDuet-train-83#paper-1214#slide-1
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
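The attention and copying equations quoted in the record above (Eq. 4-6) can be made concrete with a short NumPy sketch. This is an illustration under stated assumptions rather than the authors' implementation: the toy dimensions, the use of tanh as the non-linearity σ, and a single shared softmax normaliser over the generation and copy scores are choices made only for the example.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(s_prev, H, Wa, Ua, va):
    """Additive attention: c_i = sum_j alpha_ij * h_j (Eq. 4 above)."""
    energies = np.array([va @ np.tanh(Wa @ s_prev + Ua @ h) for h in H])
    alpha = softmax(energies)              # soft alignment over source positions
    return alpha @ H, alpha                # context vector and attention weights

def generate_or_copy(gen_scores, H, s_t, Wc, src_tokens, vocab):
    """Mix generating from the vocabulary with copying source words (Eq. 5-6 above)."""
    copy_scores = np.array([np.tanh(h @ Wc) @ s_t for h in H])    # psi_c(x_j)
    probs = softmax(np.concatenate([gen_scores, copy_scores]))    # shared normaliser Z
    p = dict(zip(vocab, probs[:len(vocab)]))
    # copy mass attaches to the matching source words, including out-of-vocabulary ones
    for tok, pc in zip(src_tokens, probs[len(vocab):]):
        p[tok] = p.get(tok, 0.0) + pc
    return p

# toy sizes: 4 source positions, 8-dim encoder states, 6-dim decoder state
rng = np.random.default_rng(0)
H, s = rng.normal(size=(4, 8)), rng.normal(size=6)
Wa, Ua, va = rng.normal(size=(5, 6)), rng.normal(size=(5, 8)), rng.normal(size=5)
context, alpha = attention_context(s, H, Wa, Ua, va)

vocab = ["keyphrase", "model", "text", "<unk>"]
src = ["copy", "mechanism", "keyphrase", "rnn"]
dist = generate_or_copy(rng.normal(size=len(vocab)), H, s, rng.normal(size=(8, 6)), src, vocab)
```

In this toy run, "copy" and "mechanism" receive probability mass only through the copy term, which is how out-of-vocabulary source words become predictable.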
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-1
Background Previous Approaches
Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on items Candidates must be acquired from the 1. Find candidates (noun phrase etc.) source text. recommender systems, important role, negative impact, information overload, websites, users, possibility of voting, preferences, items Only able to predict phrases appear in text 2. Scoring Highly rely on manual feature design Dataset % Present % Absent simple features can hardly represent Inspec Krapivin deep semantics NUS 3. Rank and return Top K neither flexible nor scalable SemEval recommender systems (0.733) information overload (0.524) preferences (0.197) websites (0.132), negative impact (0.057)
Recommender systems play an important role in reducing the negative impact of information overload on those websites where users have the possibility of voting for their preferences on items Candidates must be acquired from the 1. Find candidates (noun phrase etc.) source text. recommender systems, important role, negative impact, information overload, websites, users, possibility of voting, preferences, items Only able to predict phrases appear in text 2. Scoring Highly rely on manual feature design Dataset % Present % Absent simple features can hardly represent Inspec Krapivin deep semantics NUS 3. Rank and return Top K neither flexible nor scalable SemEval recommender systems (0.733) information overload (0.524) preferences (0.197) websites (0.132), negative impact (0.057)
[]
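As a point of comparison for the extract-then-rank pipeline outlined on this slide (find candidates, score, return top K), the following is a minimal sketch of such a baseline. It is not any specific published system: plain n-grams stand in for the POS-filtered noun-phrase candidates, and the score is an averaged TF-IDF computed over a small, hypothetical background corpus.

```python
import re
from collections import Counter
from math import log

def candidates(text, max_len=3):
    """Step 1: contiguous n-grams of alphabetic tokens as candidate phrases."""
    toks = re.findall(r"[a-z]+", text.lower())
    return sorted({" ".join(toks[i:i + n])
                   for n in range(1, max_len + 1)
                   for i in range(len(toks) - n + 1)})

def tfidf_rank(doc, corpus, top_k=5):
    """Steps 2-3: score each candidate by the average tf-idf of its words, keep top K."""
    doc_freq = Counter(w for d in corpus for w in set(re.findall(r"[a-z]+", d.lower())))
    tf = Counter(re.findall(r"[a-z]+", doc.lower()))
    n_docs = len(corpus)

    def score(phrase):
        words = phrase.split()
        return sum(tf[w] * log(n_docs / (1 + doc_freq[w])) for w in words) / len(words)

    return sorted(candidates(doc), key=score, reverse=True)[:top_k]
```

By construction such a pipeline can only return phrases that literally occur in the source text, which is exactly the limitation ("only able to predict phrases [that] appear in text") that this slide highlights and that the generative model in this record is designed to remove.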
GEM-SciDuet-train-83#paper-1214#slide-2
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
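The implementation details listed in the record above (50k-word vocabulary, 150-dim embeddings, 300-dim hidden layers, bidirectional GRU encoder, forward GRU decoder, Adam with learning rate 1e-4, gradient clipping 0.1, dropout 0.5) can be summarised in a skeletal PyTorch sketch. The paper does not specify a framework, so this is only a hypothetical reconstruction, and the attention and copying mechanisms are deliberately omitted to keep it short.

```python
import torch
import torch.nn as nn

class Seq2SeqKeyphrase(nn.Module):
    """Bidirectional-GRU encoder and GRU decoder with the stated hyper-parameters."""

    def __init__(self, vocab_size=50_000, emb_dim=150, hid_dim=300, dropout=0.5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True, bidirectional=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.bridge = nn.Linear(2 * hid_dim, hid_dim)   # merge the two encoder directions
        self.out = nn.Linear(hid_dim, vocab_size)
        self.drop = nn.Dropout(dropout)

    def forward(self, src, tgt):
        _, h = self.encoder(self.drop(self.embed(src)))
        h0 = torch.tanh(self.bridge(torch.cat([h[0], h[1]], dim=-1))).unsqueeze(0)
        dec_states, _ = self.decoder(self.drop(self.embed(tgt)), h0)
        return self.out(dec_states)                     # logits over the vocabulary

model = Seq2SeqKeyphrase()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# per training step: compute cross-entropy loss, loss.backward(), then
# torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1) and optimizer.step()
```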
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-2
Motivation Revisit Keyphrase Generation
How do humans assign keyphrases? Understand and get contextual information Summarize and write down the most Get hints from text, copy certain phrases Can machine simulate this process? topic tracking Memory Recurrent Neural Networks [Step 1-3] multilingual Copy Mechanism [Step 4] Write Keyphrase text mining
How do humans assign keyphrases? Understand and get contextual information Summarize and write down the most Get hints from text, copy certain phrases Can machine simulate this process? topic tracking Memory Recurrent Neural Networks [Step 1-3] multilingual Copy Mechanism [Step 4] Write Keyphrase text mining
[]
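The record above generates keyphrases with beam search over the decoder's word distribution (beam size 200, maximum depth 6, a max-heap keeping the best sequences). A minimal, model-agnostic sketch of that search follows; the `next_token_probs` callable and the toy transition table are stand-ins for the trained decoder with attention and copying, and the small beam size is chosen only for readability.

```python
import heapq
from math import log

def beam_search(next_token_probs, bos="<bos>", eos="<eos>", beam_size=3, max_len=6):
    """Keep the beam_size most probable prefixes; expand until <eos> or max_len."""
    beams = [(0.0, [bos])]                                  # (log-probability, tokens)
    for _ in range(max_len):
        expanded = []
        for logp, seq in beams:
            if seq[-1] == eos:                              # finished hypothesis: keep as is
                expanded.append((logp, seq))
                continue
            for tok, p in next_token_probs(seq).items():
                expanded.append((logp + log(p), seq + [tok]))
        beams = heapq.nlargest(beam_size, expanded, key=lambda x: x[0])
        if all(seq[-1] == eos for _, seq in beams):
            break
    return [[t for t in seq if t not in (bos, eos)] for _, seq in beams]

def toy_probs(prefix):
    """Toy next-word distribution standing in for the RNN decoder."""
    table = {"<bos>": {"keyphrase": 0.6, "text": 0.4},
             "keyphrase": {"generation": 0.7, "<eos>": 0.3},
             "generation": {"<eos>": 1.0},
             "text": {"mining": 0.8, "<eos>": 0.2},
             "mining": {"<eos>": 1.0}}
    return table[prefix[-1]]

print(beam_search(toy_probs))   # [['keyphrase', 'generation'], ['text', 'mining'], ['keyphrase']]
```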
GEM-SciDuet-train-83#paper-1214#slide-3
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
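A minimal sketch of the encoder-decoder formulation in Equations (1)-(3) above, written in PyTorch purely for illustration. The paper does not release this code, so the module choices (a single-layer GRU encoder, a GRUCell decoder, using the last encoder state as the context q(h_1..h_T), and teacher forcing during training) are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Seq2SeqSketch(nn.Module):
    """Illustrative encoder-decoder: Eq.(1) h_t = f(x_t, h_{t-1}),
    Eq.(2) c = q(h_1..h_T), Eq.(3) s_t = f(y_{t-1}, s_{t-1}, c), p(y_t) = g(...)."""
    def __init__(self, vocab_size=50000, emb_dim=150, hid_dim=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)  # f in Eq.(1)
        self.decoder = nn.GRUCell(emb_dim, hid_dim)                # f in Eq.(3)
        self.out = nn.Linear(hid_dim, vocab_size)                  # g, the softmax classifier

    def forward(self, src, tgt_in):
        # tgt_in is the gold keyphrase shifted right (BOS first), so step t
        # consumes y_{t-1} and predicts y_t (teacher forcing).
        _, last_h = self.encoder(self.embed(src))   # Eq.(1); c = q(h) taken as h_T here
        s = last_h.squeeze(0)                       # c only initializes s_0 (a simplification)
        logits = []
        for t in range(tgt_in.size(1)):
            s = self.decoder(self.embed(tgt_in[:, t]), s)
            logits.append(self.out(s))              # unnormalized scores over the vocabulary
        return torch.stack(logits, dim=1)

# usage sketch: maximize the conditional likelihood of the target with cross-entropy
model = Seq2SeqSketch()
src = torch.randint(1, 50000, (2, 30))                      # 2 source texts, 30 word ids each
tgt = torch.randint(1, 50000, (2, 5))                       # gold keyphrase word ids
bos = torch.zeros(2, 1, dtype=torch.long)                   # id 0 reserved for BOS in this sketch
logits = model(src, torch.cat([bos, tgt[:, :-1]], dim=1))   # (2, 5, vocab)
loss = nn.CrossEntropyLoss()(logits.reshape(-1, 50000), tgt.reshape(-1))
```

In the paper the encoder is bidirectional and the context vector enters every decoder step through attention; those refinements are covered by Equations (4)-(6) described next.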
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based on the deep semantic meaning of the text, and is able to handle rarely occurring phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
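The attention (Equation 4) and copying (Equations 5 and 6) components described in Sections 3.3 and 3.4 of the content above can be sketched as a single decoding step as follows. This is an illustrative PyTorch reconstruction, not the authors' released code: the layer sizes, the additive alignment function a(s, h), and the way the two probability terms are mixed are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical scaffolding; sizes follow the paper's settings (embedding 150, hidden 300, vocab 50k).
H, E, V = 300, 150, 50000
params = {
    "W": nn.Linear(H, H), "U": nn.Linear(H, H), "v": nn.Linear(H, 1),   # additive attention a(s, h)
    "gru": nn.GRUCell(E + H, H), "out": nn.Linear(H, V), "W_c": nn.Linear(H, H),
}

def decode_step_with_copy(s_prev, y_prev_emb, enc_h, src_ids):
    """One decoding step mixing generation and copying (Eqs. 4-6, sketched).
    s_prev: (B, H) previous decoder state; y_prev_emb: (B, E) embedding of y_{t-1};
    enc_h: (B, T, H) encoder states; src_ids: (B, T) word ids of the source text."""
    # Attention, Eq.(4): alpha_ij = softmax_j a(s_{i-1}, h_j); c_i = sum_j alpha_ij h_j
    e = params["v"](torch.tanh(params["W"](s_prev).unsqueeze(1) + params["U"](enc_h))).squeeze(-1)
    alpha = F.softmax(e, dim=-1)                            # (B, T)
    c = torch.bmm(alpha.unsqueeze(1), enc_h).squeeze(1)     # (B, H) context vector

    # Decoder state update and generative distribution over the vocabulary
    s = params["gru"](torch.cat([y_prev_emb, c], dim=-1), s_prev)
    p_gen = F.softmax(params["out"](s), dim=-1)             # p_g(y_t | ...), shape (B, V)

    # Copying, Eq.(6): psi_c(x_j) = sigma(h_j^T W_c) s_t, normalized over source positions only
    psi = torch.einsum("bth,bh->bt", torch.sigmoid(params["W_c"](enc_h)), s)
    p_copy_src = F.softmax(psi, dim=-1)                     # (B, T)
    p_copy = torch.zeros_like(p_gen).scatter_add_(1, src_ids, p_copy_src)

    # Eq.(5) adds the two modes, each with its own normalization; averaging here simply keeps
    # the result a proper distribution - the exact mixing weight is an assumption.
    return s, 0.5 * (p_gen + p_copy)
```

At inference time such a step would be called once per beam hypothesis inside the beam search, with the mixed distribution used to propose the next candidate words, so out-of-vocabulary source words can still be emitted through the copy term.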
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-3
Methodology Recurrent Neural Networks
Encoder-decoder model (Seq2seq) o One and one o Gated recurrent units (GRU) cell o Decoder generates multiple short sequences by beam search o Rank them and return the top K results Problem of RNN model o Only train vectors for top 50k high-frequency words o Long-tail words are replaced with an unknown symbol <unk> o Unable to predict long-tail words o Many keyphrases contain long-tail words (2%) [diagram residue: beam-search candidates with probabilities, e.g. "topic tracking" Prob=0.027, "latent dirichlet allocation" Prob=0.101, "text mining" Prob=0.014, "cell analysis" Prob=0.003; memory/context cells; 50k short-tail words vs 250k long-tail words]
Encoder-decoder model (Seq2seq) o One and one o Gated recurrent units (GRU) cell o Decoder generates multiple short sequences by beam search o Rank them and return the top K results Problem of RNN model o Only train vectors for top 50k high-frequency words o Long-tail words are replaced with an unknown symbol <unk> o Unable to predict long-tail words o Many keyphrases contain long-tail words (2%) [diagram residue: beam-search candidates with probabilities, e.g. "topic tracking" Prob=0.027, "latent dirichlet allocation" Prob=0.101, "text mining" Prob=0.014, "cell analysis" Prob=0.003; memory/context cells; 50k short-tail words vs 250k long-tail words]
[]
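The slide above summarizes the decoding side of the model: the decoder generates many short candidate sequences with beam search, ranks them, and returns the top K. A schematic, model-agnostic sketch of that loop follows; step_fn is a hypothetical callable returning next-word log-probabilities and an updated decoder state, the beam width and depth echo the Implementation Details section (beam size 200, max depth 6), and the pruning of all but the first single-word phrase mirrors the heuristic described there.

```python
import heapq

def beam_search(step_fn, init_state, bos_id, eos_id, beam_size=200, max_len=6):
    """Generate candidate keyphrases with beam search.
    step_fn(tokens, state) -> (log_probs over vocab, new_state) is an assumed interface."""
    beams = [(0.0, [bos_id], init_state)]            # (cumulative log-prob, tokens, state)
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, tokens, state in beams:
            log_probs, new_state = step_fn(tokens, state)
            # in practice one would expand only the few best next words per beam
            for word_id, lp in enumerate(log_probs):
                if word_id == eos_id:
                    finished.append((score + lp, tokens, new_state))
                else:
                    candidates.append((score + lp, tokens + [word_id], new_state))
        # keep the best `beam_size` unfinished hypotheses (a max-heap style selection)
        beams = heapq.nlargest(beam_size, candidates, key=lambda x: x[0])
    finished.extend(beams)
    return sorted(finished, key=lambda x: x[0], reverse=True)

def top_k_keyphrases(ranked, id2word, k=10):
    """Rank candidates and keep only the first (highest-probability) single-word phrase."""
    phrases, seen_single = [], False
    for score, tokens, _ in ranked:
        words = [id2word[t] for t in tokens[1:]]     # drop the BOS token
        if not words:
            continue
        if len(words) == 1:
            if seen_single:
                continue                             # discard every later single-word candidate
            seen_single = True
        phrase = " ".join(words)
        if phrase not in phrases:
            phrases.append(phrase)
        if len(phrases) == k:
            break
    return phrases
```

The single-word filter is only a post-processing choice to counter the model's bias toward short outputs; the beam search itself is standard and could be swapped for any off-the-shelf decoder.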
GEM-SciDuet-train-83#paper-1214#slide-4
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
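The evaluation metric described above (macro-averaged precision, recall and F1 at the top 5/10 predictions, with Porter stemming applied before matching) can be sketched as follows. This is a minimal illustration rather than the authors' code: it assumes NLTK's PorterStemmer, uses the conventional per-document recall denominator (the number of gold keyphrases), and all function names are made up; macro-averaging would simply average these per-document scores over the test collection.

```python
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()

def normalize(phrase):
    # Stem every token so that minor morphological variants still count as a match.
    return " ".join(stemmer.stem(tok) for tok in phrase.lower().split())

def scores_at_k(predicted, gold, k):
    # predicted: ranked keyphrase strings for one document; gold: author-assigned keyphrases.
    pred_k = [normalize(p) for p in predicted[:k]]
    gold_set = {normalize(g) for g in gold}
    correct = sum(1 for p in pred_k if p in gold_set)
    precision = correct / len(pred_k) if pred_k else 0.0
    recall = correct / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1
```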
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-4
Methodology Copy Mechanism
CopyRNN Model (copy mechanism): o Copy words from input text o Locate the words of interest by contextual topic o Copy corresponding part to output text o Enhance the RNN with extractive ability. (Slide diagram labels: unk; topic tracking; native language hypothesis; RNN Dictionary, 50k short-tail words, 250k long-tail words; multiple; language; multilingual.)
CopyRNN Model (copy mechanism): o Copy words from input text o Locate the words of interest by contextual topic o Copy corresponding part to output text o Enhance the RNN with extractive ability. (Slide diagram labels: unk; topic tracking; native language hypothesis; RNN Dictionary, 50k short-tail words, 250k long-tail words; multiple; language; multilingual.)
[]
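The copy mechanism summarized on this slide, and formalized in Equations 5 and 6 of the paper text, scores each source position with psi_c(x_j) = sigma(h_j^T W_c) s_t and adds the resulting copy probabilities to the generation distribution. Below is a rough NumPy sketch under a few stated assumptions: sigma is taken to be tanh (the paper only calls it a non-linear function), one shared softmax normalizer is used over generation and copy scores as in CopyNet, and source_ids is a hypothetical array mapping each source position to an id in an extended vocabulary.

```python
import numpy as np

def copy_augmented_distribution(gen_scores, enc_states, s_t, W_c, source_ids, ext_vocab_size):
    # gen_scores: (V,) unnormalized generation scores over the fixed vocabulary
    # enc_states: (T, d_h) encoder states h_j of the source words; s_t: (d_s,) decoder state
    copy_scores = np.tanh(enc_states @ W_c) @ s_t            # psi_c(x_j), shape (T,)
    all_scores = np.concatenate([gen_scores, copy_scores])   # shared normalizer Z
    all_probs = np.exp(all_scores - all_scores.max())
    all_probs /= all_probs.sum()
    p = np.zeros(ext_vocab_size)
    p[: len(gen_scores)] += all_probs[: len(gen_scores)]     # p_g: generate from the vocabulary
    for prob, idx in zip(all_probs[len(gen_scores):], source_ids):
        p[idx] += prob                                        # p_c: copy the word at source position j
    return p                                                  # p(y_t | y_<t, x) over the extended vocabulary
```

Scattering the copy probabilities onto an extended vocabulary is what lets the model emit out-of-vocabulary words, provided they appear somewhere in the source text.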
GEM-SciDuet-train-83#paper-1214#slide-5
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
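The attention of Equation 4 above builds the context vector c_i as a weighted sum of encoder states, with weights alpha_ij obtained by a softmax over an alignment score a(s_{i-1}, h_j). The paper does not spell out the alignment function, so the sketch below assumes the common additive form of Bahdanau et al.; all parameter names (W_a, U_a, v_a) are placeholders.

```python
import numpy as np

def attention_context(s_prev, enc_states, W_a, U_a, v_a):
    # s_prev: (d_s,) previous decoder state; enc_states: (T, d_h) encoder states h_1..h_T
    scores = np.tanh(s_prev @ W_a + enc_states @ U_a) @ v_a  # assumed a(s_{i-1}, h_j), shape (T,)
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                                      # alpha_ij from Equation 4
    context = alpha @ enc_states                              # c_i = sum_j alpha_ij h_j
    return context, alpha
```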
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-5
Experiment Dataset
All data are scientific papers in the Computer Science domain, collected from Elsevier, ACM Digital Library, Web of Science, etc. Four commonly used datasets; only abstract text is used. Overlapping papers are removed from the training dataset. (Slide table columns: Dataset, # Paper, # All (Avg), # Present, # Absent, % Absent; # Unique word.)
All data are scientific papers in the Computer Science domain, collected from Elsevier, ACM Digital Library, Web of Science, etc. Four commonly used datasets; only abstract text is used. Overlapping papers are removed from the training dataset. (Slide table columns: Dataset, # Paper, # All (Avg), # Present, # Absent, % Absent; # Unique word.)
[]
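The data preparation behind this slide and Sections 3.1/4.3 above (title plus abstract as the source text, lowercasing, digit replacement, and one text-keyphrase training pair per author-assigned keyword) might look roughly like this. The exact tokenizer is not specified in the paper, so the regular expression and the <digit> symbol name are assumptions.

```python
import re

def preprocess(text):
    # Lowercase, split into word / number / punctuation tokens, map digit runs to <digit>.
    tokens = re.findall(r"[a-z]+|\d+|[^\s\w]", text.lower())
    return ["<digit>" if tok.isdigit() else tok for tok in tokens]

def make_training_pairs(title, abstract, keyphrases):
    # One (source, target) pair per author-assigned keyphrase, as in Section 3.1.
    source = preprocess(title + " . " + abstract)
    return [(source, preprocess(kp)) for kp in keyphrases]
```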
GEM-SciDuet-train-83#paper-1214#slide-6
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
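As a reading aid for the attention and copying equations quoted in the paper content above (Equations 4 to 6), the following NumPy sketch shows one possible realization. It assumes an additive (Bahdanau-style) form for the unspecified alignment function a(s_{i-1}, h_j), uses tanh for the unspecified non-linearity sigma, and the matrix names W_a, U_a, v_a as well as all dimensions are illustrative assumptions; this is not the authors' implementation.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_context(s_prev, H, W_a, U_a, v_a):
    # Eq. (4): alpha_ij = softmax_j a(s_{i-1}, h_j), c_i = sum_j alpha_ij h_j.
    # The alignment a(.,.) is assumed additive: v_a^T tanh(W_a s_{i-1} + U_a h_j).
    scores = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in H])
    alpha = softmax(scores)
    return alpha @ H, alpha  # context vector c_i and attention weights

def copy_distribution(source_tokens, H, s_t, W_c):
    # Eq. (6): psi_c(x_j) = sigma(h_j^T W_c) s_t, with sigma assumed to be tanh.
    # p_c(y_t = w) sums exp(psi_c(x_j)) over positions j where x_j == w and
    # normalizes by Z, so only words occurring in the source text get copy mass.
    psi = np.array([np.tanh(h @ W_c) @ s_t for h in H])
    scores = np.exp(psi)
    Z = scores.sum()
    p_copy = {}
    for tok, sc in zip(source_tokens, scores):
        p_copy[tok] = p_copy.get(tok, 0.0) + sc / Z
    return p_copy
```

In the full model (Equation 5), this copy distribution would be added to the generative softmax over the vocabulary before the next word is chosen.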
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-6
Experiment Experiment Setup
Process ground-truth and predicted phrases with Porter stemmer Macro-average of precision, recall and F-measure @5,@10 o Compare to previous studies: Tf-Idf, TextRank, SingleRank, ExpandRank, KEA, Maui o No baseline comparison Transfer to news dataset
Process ground-truth and predicted phrases with Porter stemmer Macro-average of precision, recall and F-measure @5,@10 o Compare to previous studies: Tf-Idf, TextRank, SingleRank, ExpandRank, KEA, Maui o No baseline comparison Transfer to news dataset
[]
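The evaluation protocol quoted in the paper content above (precision, recall and F1 over the top-k predictions, with Porter stemming applied before matching) could be computed per document as follows. NLTK's Porter stemmer is used here as one concrete choice, and recall is taken over the gold keyphrases of each document, which is the conventional reading of the metric; macro-averaging these per-document scores over a test set would then give the reported numbers. The sketch is illustrative under those assumptions.

```python
from nltk.stem.porter import PorterStemmer

_stemmer = PorterStemmer()

def _stem_phrase(phrase):
    # normalize a keyphrase to its stemmed, lowercased form before matching
    return " ".join(_stemmer.stem(w) for w in phrase.lower().split())

def prf_at_k(predicted, gold, k):
    pred = [_stem_phrase(p) for p in predicted[:k]]
    gold_set = {_stem_phrase(g) for g in gold}
    correct = sum(1 for p in pred if p in gold_set)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Usage: F1@5 for one document; averaging over documents gives the macro scores.
p, r, f1 = prf_at_k(["video metadata", "integrated ranking", "rich content"],
                    ["video metadata", "video retrieval", "integrated ranking"], k=5)
```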
GEM-SciDuet-train-83#paper-1214#slide-7
1214
Deep Keyphrase Generation
GEM-SciDuet-train-83#paper-1214#slide-7
Result Task 1 Predict Present Keyphrase
Dataset Inspec Krapivin NUS SemEval KP20k Naive RNN model fails to compete with baseline models CopyRNN models outperform baseline models and RNN significantly. Copy mechanism can capture key information in source text.
Dataset Inspec Krapivin NUS SemEval KP20k Naive RNN model fails to compete with baseline models CopyRNN models outperform baseline models and RNN significantly. Copy mechanism can capture key information in source text.
[]
GEM-SciDuet-train-83#paper-1214#slide-8
1214
Deep Keyphrase Generation
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
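The methodology passages in the extracted paper text above state the attention weighting (Eq. 4) and the copy-scoring term (Eqs. 5-6) only as formulas. The following minimal Python/NumPy sketch spells out those two computations; it illustrates the quoted equations rather than the authors' implementation, and the tensor shapes, the bilinear alignment score and the choice of tanh for the non-linearity sigma are assumptions made for the example.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_context(s_prev, H, W_a):
    # Eq. 4: c_i = sum_j alpha_ij * h_j, with alpha_ij a softmax over soft alignment scores.
    # s_prev: (d_s,) previous decoder state s_{i-1}; H: (T, d_h) encoder states;
    # W_a: (d_s, d_h) assumed bilinear alignment parameters.
    scores = H @ (W_a.T @ s_prev)      # a(s_{i-1}, h_j) for every source position j
    alpha = softmax(scores)
    return alpha @ H, alpha

def copy_score(s_t, H, W_c, source_tokens, candidate):
    # Eqs. 5-6: copy probability of `candidate`; only source positions whose
    # token equals the candidate contribute to the sum.
    # s_t: (d_s,) decoder state; W_c: (d_h, d_s); source_tokens: length-T word list.
    psi = np.tanh(H @ W_c) @ s_t       # psi_c(x_j) = sigma(h_j^T W_c) s_t
    mask = np.array([tok == candidate for tok in source_tokens], dtype=float)
    Z = np.exp(psi).sum()              # simplified normaliser over source positions only
    return float((mask * np.exp(psi)).sum() / Z)

In the full model the generative term of Eq. 3 and this copy term share one normaliser; the sketch keeps the copy part separate only to stay short.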
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-8
Result Task 3 Transfer to News Articles
So far training and testing are only about scientific papers What if transfer it to a completely unseen domain o Does model learn any universal feature? Test the CopyRNN on DUC-2001 o 308 news articles and 2,488 keyphrases o CopyRNN recalls 766 keyphrases. 14.3% contain out-of-vocabulary words o Many names of persons and places are correctly predicted.
So far training and testing are only about scientific papers What if transfer it to a completely unseen domain o Does model learn any universal feature? Test the CopyRNN on DUC-2001 o 308 news articles and 2,488 keyphrases o CopyRNN recalls 766 keyphrases. 14.3% contain out-of-vocabulary words o Many names of persons and places are correctly predicted.
[]
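The "Evaluation Metric" passage in the paper text above scores the top-k predictions with precision, recall and F1, matching phrases only after Porter stemming. A self-contained sketch of that per-document computation follows; using NLTK's PorterStemmer and taking the document's gold keyphrases as the recall denominator are assumptions for the example, not a reproduction of the authors' evaluation script.

from nltk.stem import PorterStemmer

_stem = PorterStemmer().stem

def _norm(phrase):
    # stem every token so that, e.g., "video indexing" and "video index" match
    return " ".join(_stem(w) for w in phrase.lower().split())

def f1_at_k(predicted, gold, k=5):
    preds = [_norm(p) for p in predicted[:k]]
    golds = {_norm(g) for g in gold}
    correct = sum(p in golds for p in preds)
    precision = correct / len(preds) if preds else 0.0
    recall = correct / len(golds) if golds else 0.0
    return 0.0 if precision + recall == 0 else 2 * precision * recall / (precision + recall)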
GEM-SciDuet-train-83#paper-1214#slide-9
1214
Deep Keyphrase Generation
Keyphrase provides highly-summative information that can be effectively used for understanding, organizing and retrieving text content. Though previous studies have provided many workable solutions for automated keyphrase extraction, they commonly divided the to-be-summarized content into multiple text chunks, then ranked and selected the most meaningful ones. These approaches could neither identify keyphrases that do not appear in the text, nor capture the real semantic meaning behind the text. We propose a generative model for keyphrase prediction with an encoder-decoder framework, which can effectively overcome the above drawbacks. We name it as deep keyphrase generation since it attempts to capture the deep semantic meaning of the content with a deep learning method. Empirical analysis on six datasets demonstrates that our proposed model not only achieves a significant performance boost on extracting keyphrases that appear in the source text, but also can generate absent keyphrases based on the semantic meaning of the text. Code and dataset are available at https://github.com/memray/seq2seqkeyphrase.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225 ], "paper_content_text": [ "Introduction A keyphrase or keyword is a piece of short, summative content that expresses the main semantic meaning of a longer text.", "The typical use of a keyphrase or keyword is in scientific publications to provide the core information of a paper.", "We use * Corresponding author the term \"keyphrase\" interchangeably with \"keyword\" in the rest of this paper, as both terms have an implication that they may contain multiple words.", "High-quality keyphrases can facilitate the understanding, organizing, and accessing of document content.", "As a result, many studies have focused on ways of automatically extracting keyphrases from textual content (Liu et al., 2009; Medelyan et al., 2009a; .", "Due to public accessibility, many scientific publication datasets are often used as test beds for keyphrase extraction algorithms.", "Therefore, this study also focuses on extracting keyphrases from scientific publications.", "Automatically extracting keyphrases from a document is called keypharase extraction, and it has been widely used in many applications, such as information retrieval (Jones and Staveley, 1999) , text summarization (Zhang et al., 2004 ), text categorization (Hulth and Megyesi, 2006) , and opinion mining (Berend, 2011) .", "Most of the existing keyphrase extraction algorithms have addressed this problem through two steps (Liu et al., 2009; Tomokiyo and Hurst, 2003) .", "The first step is to acquire a list of keyphrase candidates.", "Researchers have tried to use n-grams or noun phrases with certain part-of-speech patterns for identifying potential candidates (Hulth, 2003; Le et al., 2016; Liu et al., 2010; .", "The second step is to rank candidates on their importance to the document, either through supervised or unsupervised machine learning methods with a set of manually-defined features Liu et al., 2009 Liu et al., , 2010 Kelleher and Luz, 2005; Matsuo and Ishizuka, 2004; Mihalcea and Tarau, 2004; Song et al., 2003; .", "There are two major drawbacks in the above keyphrase extraction approaches.", "First, these methods can only extract the keyphrases that ap-pear in the source text; they fail at predicting meaningful keyphrases with a slightly different sequential order or those that use synonyms.", "However, authors of scientific publications commonly assign keyphrases based on their semantic meaning, instead of following the written content in the publication.", "In this paper, we denote phrases that do not match any 
contiguous subsequence of source text as absent keyphrases, and the ones that fully match a part of the text as present keyphrases.", "Table 1 shows the proportion of present and absent keyphrases from the document abstract in four commonly-used datasets, from which we can observe large portions of absent keyphrases in all the datasets.", "The absent keyphrases cannot be extracted through previous approaches, which further prompts the development of a more powerful keyphrase prediction model.", "Second, when ranking phrase candidates, previous approaches often adopted machine learning features such as TF-IDF and PageRank.", "However, these features only target to detect the importance of each word in the document based on the statistics of word occurrence and co-occurrence, and are unable to reveal the full semantics that underlie the document content.", "To overcome the limitations of previous studies, we re-examine the process of keyphrase prediction with a focus on how real human annotators would assign keyphrases.", "Given a document, human annotators will first read the text to get a basic understanding of the content, then they try to digest its essential content and summarize it into keyphrases.", "Their generation of keyphrases relies on an understanding of the content, which may not necessarily use the exact words that occur in the source text.", "For example, when human annotators see \"Latent Dirichlet Allocation\" in the text, they might write down \"topic modeling\" and/or \"text mining\" as possible keyphrases.", "In addition to the semantic understanding, human annotators might also go back and pick up the most important parts, based on syntactic features.", "For example, the phrases following \"we propose/apply/use\" could be important in the text.", "As a result, a better keyphrase prediction model should understand the semantic meaning of the content, as well as capture the contextual features.", "To effectively capture both the semantic and syntactic features, we use recurrent neural networks (RNN) Gers and Schmidhuber, 2001) to compress the semantic information in the given text into a dense vector (i.e., semantic understanding).", "Furthermore, we incorporate a copying mechanism (Gu et al., 2016) to allow our model to find important parts based on positional information.", "Thus, our model can generate keyphrases based on an understanding of the text, regardless of the presence or absence of keyphrases in the text; at the same time, it does not lose important in-text information.", "The contribution of this paper is three-fold.", "First, we propose to apply an RNN-based generative model to keyphrase prediction, as well as incorporate a copying mechanism in RNN, which enables the model to successfully predict phrases that rarely occur.", "Second, this is the first work that concerns the problem of absent keyphrase prediction for scientific publications, and our model recalls up to 20% of absent keyphrases.", "Third, we conducted a comprehensive comparison against six important baselines on a broad range of datasets, and the results show that our proposed model significantly outperforms existing supervised and unsupervised extraction methods.", "In the remainder of this paper, we first review the related work in Section 2.", "Then, we elaborate upon the proposed model in Section 3.", "After that, we present the experiment setting in Section 4 and results in Section 5, followed by our discussion in Section 6.", "Section 7 concludes the paper.", "Related Work Automatic 
Keyphrase Extraction A keyphrase provides a succinct and accurate way of describing a subject or a subtopic in a document.", "A number of extraction algorithms have been proposed, and the process of extracting keyphrases can typically be broken down into two steps.", "The first step is to generate a list of phrase can-didates with heuristic methods.", "As these candidates are prepared for further filtering, a considerable number of candidates are produced in this step to increase the possibility that most of the correct keyphrases are kept.", "The primary ways of extracting candidates include retaining word sequences that match certain part-of-speech tag patterns (e.g., nouns, adjectives) (Liu et al., 2011; Le et al., 2016) , and extracting important n-grams or noun phrases (Hulth, 2003; Medelyan et al., 2008) .", "The second step is to score each candidate phrase for its likelihood of being a keyphrase in the given document.", "The top-ranked candidates are returned as keyphrases.", "Both supervised and unsupervised machine learning methods are widely employed here.", "For supervised methods, this task is solved as a binary classification problem, and various types of learning methods and features have been explored Hulth, 2003; Medelyan et al., 2009b; Lopez and Romary, 2010; Gollapalli and Caragea, 2014) .", "As for unsupervised approaches, primary ideas include finding the central nodes in text graph (Mihalcea and Tarau, 2004; Grineva et al., 2009) , detecting representative phrases from topical clusters (Liu et al., 2009 (Liu et al., , 2010 , and so on.", "Aside from the commonly adopted two-step process, another two previous studies realized the keyphrase extraction in entirely different ways.", "Tomokiyo and Hurst (2003) applied two language models to measure the phraseness and informativeness of phrases.", "Liu et al.", "(2011) share the most similar ideas to our work.", "They used a word alignment model, which learns a translation from the documents to the keyphrases.", "This approach alleviates the problem of vocabulary gaps between source and target to a certain degree.", "However, this translation model is unable to handle semantic meaning.", "Additionally, this model was trained with the target of title/summary to enlarge the number of training samples, which may diverge from the real objective of generating keyphrases.", "Zhang et al.", "(2016) proposed a joint-layer recurrent neural network model to extract keyphrases from tweets, which is another application of deep neural networks in the context of keyphrase extraction.", "However, their work focused on sequence labeling, and is therefore not able to predict absent keyphrases.", "Encoder-Decoder Model The RNN Encoder-Decoder model (which is also referred as sequence-to-sequence Learning) is an end-to-end approach.", "It was first introduced by and Sutskever et al.", "(2014) to solve translation problems.", "As it provides a powerful tool for modeling variable-length sequences in an end-to-end fashion, it fits many natural language processing tasks and can rapidly achieve great successes (Rush et al., 2015; Vinyals et al., 2015; Serban et al., 2016) .", "Different strategies have been explored to improve the performance of the Encoder-Decoder model.", "The attention mechanism is a soft alignment approach that allows the model to automatically locate the relevant input components.", "In order to make use of the important information in the source text, some studies sought ways to copy certain parts of content from the source 
text and paste them into the target text (Allamanis et al., 2016; Gu et al., 2016; Zeng et al., 2016) .", "A discrepancy exists between the optimizing objective during training and the metrics during evaluation.", "A few studies attempted to eliminate this discrepancy by incorporating new training algorithms (Marc'Aurelio Ranzato et al., 2016) or by modifying the optimizing objectives (Shen et al., 2016) .", "Methodology This section will introduce our proposed deep keyphrase generation method in detail.", "First, the task of keyphrase generation is defined, followed by an overview of how we apply the RNN Encoder-Decoder model.", "Details of the framework as well as the copying mechanism will be introduced in Sections 3.3 and 3.4.", "Problem Definition Given a keyphrase dataset that consists of N data samples, the i-th data sample (x (i) , p (i) ) contains one source text x (i) , and M i target keyphrases p (i) = (p (i,1) , p (i,2) , .", ".", ".", ", p (i,M i ) ).", "Both the source text x (i) and keyphrase p (i,j) are sequences of words: x (i) = x (i) 1 , x (i) 2 , .", ".", ".", ", x (i) L x i p (i,j) = y (i,j) 1 , y (i,j) 2 , .", ".", ".", ", y (i,j) L p (i,j) L x (i) and L p (i,j) denotes the length of word sequence of x (i) and p (i,j) respectively.", "Each data sample contains one source text sequence and multiple target phrase sequences.", "To apply the RNN Encoder-Decoder model, the data need to be converted into text-keyphrase pairs that contain only one source sequence and one target sequence.", "We adopt a simple way, which splits the data sample ( x (i) , p (i) ) into M i pairs: (x (i) , p (i,1) ), (x (i) , p (i,2) ), .", ".", ".", ", (x (i) , p (i,M i ) ).", "Then the Encoder-Decoder model is ready to be applied to learn the mapping from the source sequence to target sequence.", "For the purpose of simplicity, (x, y) is used to denote each data pair in the rest of this section, where x is the word sequence of a source text and y is the word sequence of its keyphrase.", "Encoder-Decoder Model The basic idea of our keyphrase generation model is to compress the content of source text into a hidden representation with an encoder and to generate corresponding keyphrases with the decoder, based on the representation .", "Both the encoder and decoder are implemented with recurrent neural networks (RNN).", "The encoder RNN converts the variable-length input sequence x = (x 1 , x 2 , ..., x T ) into a set of hidden representation h = (h 1 , h 2 , .", ".", ".", ", h T ), by iterating the following equations along time t: h t = f (x t , h t−1 ) (1) where f is a non-linear function.", "We get the context vector c acting as the representation of the whole input x through a non-linear function q. c = q(h 1 , h 2 , ..., h T ) (2) The decoder is another RNN; it decompresses the context vector and generates a variable-length sequence y = (y 1 , y 2 , ..., y T ) word by word, through a conditional language model: s t = f (y t−1 , s t−1 , c) p(y t |y 1,...,t−1 , x) = g(y t−1 , s t , c) (3) where s t is the hidden state of the decoder RNN at time t. 
The non-linear function g is a softmax classifier, which outputs the probabilities of all the words in the vocabulary.", "y t is the predicted word at time t, by taking the word with largest probability after g(·).", "The encoder and decoder networks are trained jointly to maximize the conditional probability of the target sequence, given a source sequence.", "After training, we use the beam search to generate phrases and a max heap is maintained to get the predicted word sequences with the highest probabilities.", "Details of the Encoder and Decoder A bidirectional gated recurrent unit (GRU) is applied as our encoder to replace the simple recurrent neural network.", "Previous studies indicate that it can generally provide better performance of language modeling than a simple RNN and a simpler structure than other Long Short-Term Memory networks (Hochreiter and Schmidhuber, 1997) .", "As a result, the above non-linear function f is replaced by the GRU function (see in ).", "Another forward GRU is used as the decoder.", "In addition, an attention mechanism is adopted to improve performance.", "The attention mechanism was firstly introduced by to make the model dynamically focus on the important parts in input.", "The context vector c is computed as a weighted sum of hidden representation h = (h 1 , .", ".", ".", ", h T ): c i = T j=1 α ij h j α ij = exp(a(s i−1 , h j )) T k=1 exp(a(s i−1 , h k )) (4) where a(s i−1 , h j ) is a soft alignment function that measures the similarity between s i−1 and h j ; namely, to which degree the inputs around position j and the output at position i match.", "Copying Mechanism To ensure the quality of learned representation and reduce the size of the vocabulary, typically the RNN model considers a certain number of frequent words (e.g.", "30,000 words in ), but a large amount of long-tail words are simply ignored.", "Therefore, the RNN is not able to recall any keyphrase that contains out-ofvocabulary words.", "Actually, important phrases can also be identified by positional and syntactic information in their contexts, even though their exact meanings are not known.", "The copying mechanism (Gu et al., 2016) is one feasible solution that enables RNN to predict out-of-vocabulary words by selecting appropriate words from the source text.", "By incorporating the copying mechanism, the probability of predicting each new word y t consists of two parts.", "The first term is the probability of generating the term (see Equation 3 ) and the second one is the probability of copying it from the source text: p(y t |y 1,...,t−1 , x) = p g (y t |y 1,...,t−1 , x) + p c (y t |y 1,...,t−1 , x) (5) Similar to attention mechanism, the copying mechanism weights the importance of each word in source text with a measure of positional attention.", "But unlike the generative RNN which predicts the next word from all the words in vocabulary, the copying part p c (y t |y 1,...,t−1 , x) only considers the words in source text.", "Consequently, on the one hand, the RNN with copying mechanism is able to predict the words that are out of vocabulary but in the source text; on the other hand, the model would potentially give preference to the appearing words, which caters to the fact that most keyphrases tend to appear in the source text.", "p c (y t |y 1,...,t−1 , x) = 1 Z j:x j =yt exp(ψ c (x j )), y ∈ χ ψ c (x j ) = σ(h T j W c )s t (6) where χ is the set of all of the unique words in the source text x, σ is a non-linear function and W c ∈ R is a learned parameter matrix.", "Z is the 
sum of all the scores and is used for normalization.", "Please see (Gu et al., 2016) for more details.", "Experiment Settings This section begins by discussing how we designed our evaluation experiments, followed by the description of training and testing datasets.", "Then, we introduce our evaluation metrics and baselines.", "Training Dataset There are several publicly-available datasets for evaluating keyphrase generation.", "The largest one came from Krapivin et al.", "(2008) , which contains 2,304 scientific publications.", "However, this amount of data is unable to train a robust recurrent neural network model.", "In fact, there are millions of scientific papers available online, each of which contains the keyphrases that were assigned by their authors.", "Therefore, we collected a large amount of high-quality scientific metadata in the computer science domain from various online digital libraries, including ACM Digital Library, Sci-enceDirect, Wiley, and Web of Science etc.", "(Han et al., 2013; Rui et al., 2016) .", "In total, we obtained a dataset of 567,830 articles, after removing duplicates and overlaps with testing datasets, which is 200 times larger than the one of Krapivin et al.", "(2008) .", "Note that our model is only trained on 527,830 articles, since 40,000 publications are randomly held out, among which 20,000 articles were used for building a new test dataset KP20k.", "Another 20,000 articles served as the validation dataset to check the convergence of our model, as well as the training dataset for supervised baselines.", "Testing Datasets For evaluating the proposed model more comprehensively, four widely-adopted scientific publication datasets were used.", "In addition, since these datasets only contain a few hundred or a few thousand publications, we contribute a new testing dataset KP20k with a much larger number of scientific articles.", "We take the title and abstract as the source text.", "Each dataset is described in detail below.", "-Inspec (Hulth, 2003) : This dataset provides 2,000 paper abstracts.", "We adopt the 500 testing papers and their corresponding uncontrolled keyphrases for evaluation, and the remaining 1,500 papers are used for training the supervised baseline models.", "- Krapivin (Krapivin et al., 2008) : This dataset provides 2,304 papers with full-text and author-assigned keyphrases.", "However, the author did not mention how to split testing data, so we selected the first 400 papers in alphabetical order as the testing data, and the remaining papers are used to train the supervised baselines.", "-NUS (Nguyen and Kan, 2007) : We use the author-assigned keyphrases and treat all 211 papers as the testing data.", "Since the NUS dataset did not specifically mention the ways of splitting training and testing data, the results of the supervised baseline models are obtained through a five-fold cross-validation.", "- SemEval-2010 (Kim et al., 2010 : 288 articles were collected from the ACM Digital Library.", "100 articles were used for testing and the rest were used for training supervised baselines.", "-KP20k: We built a new testing dataset that contains the titles, abstracts, and keyphrases of 20,000 scientific articles in computer science.", "They were randomly selected from our obtained 567,830 articles.", "Due to the memory limits of implementation, we were not able to train the supervised baselines on the whole training set.", "Thus we take the 20,000 articles in the validation set to train the supervised baselines.", "It is worth noting that 
we also examined their performance by enlarging the training dataset to 50,000 articles, but no significant improvement was observed.", "Implementation Details In total, there are 2,780,316 text, keyphrase pairs for training, in which text refers to the concatenation of the title and abstract of a publication, and keyphrase indicates an author-assigned keyword.", "The text pre-processing steps including tokenization, lowercasing and replacing all digits with symbol digit are applied.", "Two encoderdecoder models are trained, one with only attention mechanism (RNN) and one with both attention and copying mechanism enabled (Copy-RNN).", "For both models, we choose the top 50,000 frequently-occurred words as our vocabulary, the dimension of embedding is set to 150, the dimension of hidden layers is set to 300, and the word embeddings are randomly initialized with uniform distribution in [-0.1,0.1].", "Models are optimized using Adam (Kingma and Ba, 2014) with initial learning rate = 10 −4 , gradient clipping = 0.1 and dropout rate = 0.5.", "The max depth of beam search is set to 6, and the beam size is set to 200.", "The training is stopped once convergence is determined on the validation dataset (namely earlystopping, the cross-entropy loss stops dropping for several iterations).", "In the generation of keyphrases, we find that the model tends to assign higher probabilities for shorter keyphrases, whereas most keyphrases contain more than two words.", "To resolve this problem, we apply a simple heuristic by preserving only the first single-word phrase (with the highest generating probability) and removing the rest.", "Baseline Models Four unsupervised algorithms (Tf-Idf, Tex-tRank (Mihalcea and Tarau, 2004) , SingleRank (Wan and Xiao, 2008) , and ExpandRank (Wan and Xiao, 2008) ) and two supervised algorithms (KEA (Witten et al., 1999) and Maui (Medelyan et al., 2009a) ) are adopted as baselines.", "We set up the four unsupervised methods following the optimal settings in (Hasan and Ng, 2010) , and the two supervised methods following the default setting as specified in their papers.", "Evaluation Metric Three evaluation metrics, the macro-averaged precision, recall and F-measure (F 1 ) are employed for measuring the algorithm's performance.", "Following the standard definition, precision is defined as the number of correctly-predicted keyphrases over the number of all predicted keyphrases, and recall is computed by the number of correctlypredicted keyphrases over the total number of data records.", "Note that, when determining the match of two keyphrases, we use Porter Stemmer for preprocessing.", "Results and Analysis We conduct an empirical study on three different tasks to evaluate our model.", "Predicting Present Keyphrases This is the same as the keyphrase extraction task in prior studies, in which we analyze how well our proposed model performs on a commonly-defined task.", "To make a fair comparison, we only consider the present keyphrases for evaluation in this task.", "Table 2 provides the performances of the six baseline models, as well as our proposed models (i.e., RNN and CopyRNN) .", "For each method, the table lists its F-measure at top 5 and top 10 predictions on the five datasets.", "The best scores are highlighted in bold and the underlines indicate the second best performances.", "The results show that the four unsupervised models (Tf-idf, TextTank, SingleRank and Ex-pandRank) have a robust performance across different datasets.", "The ExpandRank fails to return any result 
on the KP20k dataset, due to its high time complexity.", "The measures on NUS and Se-mEval here are higher than the ones reported in (Hasan and Ng, 2010) and (Kim et al., 2010) , probably because we utilized the paper abstract instead of the full text for training, which may Method Inspec Krapivin NUS SemEval KP20k F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 F 1 @5 F 1 @10 Table 2 : The performance of predicting present keyphrases of various models on five benchmark datasets filter out some noisy information.", "The performance of the two supervised models (i.e., Maui and KEA) were unstable on some datasets, but Maui achieved the best performances on three datasets among all the baseline models.", "As for our proposed keyphrase prediction approaches, the RNN model with the attention mechanism did not perform as well as we expected.", "It might be because the RNN model is only concerned with finding the hidden semantics behind the text, which may tend to generate keyphrases or words that are too general and may not necessarily refer to the source text.", "In addition, we observe that 2.5% (70,891/2,780,316) of keyphrases in our dataset contain out-of-vocabulary words, which the RNN model is not able to recall, since the RNN model can only generate results with the 50,000 words in vocabulary.", "This indicates that a pure generative model may not fit the extraction task, and we need to further link back to the language usage within the source text.", "The CopyRNN model, by considering more contextual information, significantly outperforms not only the RNN model but also all baselines, exceeding the best baselines by more than 20% on average.", "This result demonstrates the importance of source text to the extraction task.", "Besides, nearly 2% of all correct predictions contained outof-vocabulary words.", "The example in Figure 1(a) shows the result of predicted present keyphrases by RNN and Copy-RNN for an article about video search.", "We see that both models can generate phrases that relate to the topic of information retrieval and video.", "However most of RNN predictions are high-level terminologies, which are too general to be selected as keyphrases.", "CopyRNN, on the other hand, predicts more detailed phrases like \"video metadata\" and \"integrated ranking\".", "An interesting bad case, \"rich content\" coordinates with a keyphrase \"video metadata\", and the CopyRNN mistakenly puts it into prediction.", "Predicting Absent Keyphrases As stated, one important motivation for this work is that we are interested in the proposed model's capability for predicting absent keyphrases based on the \"understanding\" of content.", "It is worth noting that such prediction is a very challenging task, and, to the best of our knowledge, no existing methods can handle this task.", "Therefore, we only provide the RNN and CopyRNN performances in the discussion of the results of this task.", "Here, we evaluate the performance within the recall of the top 10 and top 50 results, to see how many absent keyphrases can be correctly predicted.", "We use the absent keyphrases in the testing datasets for evaluation.", "Table 3 presents the recall results of the top 10/50 predicted keyphrases for our RNN and CopyRNN models, in which we observe that the CopyRNN can, on average, recall around 8% (15%) of keyphrases at top 10 (50) predictions.", "This indicates that, to some extent, both models can capture the hidden semantics behind the textual content and make reasonable predictions.", "In 
addition, with the advantage of features from the source text, the CopyRNN model also outperforms the RNN model in this condition, though it does not show as much improvement as the present keyphrase extraction task.", "An example is shown in Figure 1(b) , in which we see that two absent keyphrases, \"video retrieval\" and \"video indexing\", are correctly recalled by both models.", "Note that the term \"indexing\" does not appear in the text, but the models may detect the information \"index videos\" in the first sentence and paraphrase it to the target phrase.", "And the CopyRNN successfully predicts another two keyphrases by capturing the detailed information from the text (highlighted text segments).", "Transferring the Model to the News Domain RNN and CopyRNN are supervised models, and they are trained on data in a specific domain and writing style.", "However, with sufficient training on a large-scale dataset, we expect the models to be able to learn universal language features that are also effective in other corpora.", "Thus in this task, we will test our model on another type of text, to see whether the model would work when being transferred to a different environment.", "We use the popular news article dataset DUC-2001 (Wan and Xiao, 2008) for analysis.", "The dataset consists of 308 news articles and 2,488 manually annotated keyphrases.", "The result of this analysis is shown in Table 4 , from which we could see that the CopyRNN can extract a portion of correct keyphrases from a unfamiliar text.", "Compared to the results reported in (Hasan and Ng, 2010) , the performance of CopyRNN is better than Tex-tRank (Mihalcea and Tarau, 2004) and KeyCluster (Liu et al., 2009) , but lags behind the other three baselines.", "As it is transferred to a corpus in a completely different type and domain, the model encounters more unknown words and has to rely more on the positional and syntactic features within the text.", "In this experiment, the CopyRNN recalls 766 keyphrases.", "14.3% of them contain out-ofvocabulary words, and many names of persons and places are correctly predicted.", "Discussion Our experimental results demonstrate that the CopyRNN model not only performs well on predicting present keyphrases, but also has the ability to generate topically relevant keyphrases that are absent in the text.", "In a broader sense, this model attempts to map a long text (i.e., paper abstract) with representative short text chunks (i.e., keyphrases), which can potentially be applied to improve information retrieval performance by generating high-quality index terms, as well as assisting user browsing by summarizing long documents into short, readable phrases.", "Thus far, we have tested our model with scientific publications and news articles, and have demonstrated that our model has the ability to capture universal language patterns and extract key information from unfamiliar texts.", "We believe that our model has a greater potential to be generalized to other domains and types, like books, online reviews, etc., if it is trained on a larger data corpus.", "Also, we directly applied our model, which was trained on a publication dataset, into generating keyphrases for news articles without any adaptive training.", "We believe that with proper training on news data, the model would make further improvement.", "Additionally, this work mainly studies the problem of discovering core content from textual materials.", "Here, the encoder-decoder framework is applied to model language; however, such a 
framework can also be extended to locate the core information on other data resources, such as summarizing content from images and videos.", "Conclusions and Future Work In this paper, we proposed an RNN-based generative model for predicting keyphrases in scientific text.", "To the best of our knowledge, this is the first application of the encoder-decoder model to a keyphrase prediction task.", "Our model summarizes phrases based the deep semantic meaning of the text, and is able to handle rarely-occurred phrases by incorporating a copying mechanism.", "Comprehensive empirical studies demonstrate the effectiveness of our proposed model for generating both present and absent keyphrases for different types of text.", "Our future work may include the following two directions.", "-In this work, we only evaluated the performance of the proposed model by conducting off-line experiments.", "In the future, we are interested in comparing the model to human annotators and using human judges to evaluate the quality of predicted phrases.", "-Our current model does not fully consider correlation among target keyphrases.", "It would also be interesting to explore the multiple-output optimization aspects of our model." ] }
{ "paper_header_number": [ "1", "2.1", "2.2", "3", "3.1", "3.2", "3.3", "3.4", "4", "4.1", "4.2", "4.3", "4.4", "4.5", "5", "5.1", "5.2", "5.3", "6", "7" ], "paper_header_content": [ "Introduction", "Automatic Keyphrase Extraction", "Encoder-Decoder Model", "Methodology", "Problem Definition", "Encoder-Decoder Model", "Details of the Encoder and Decoder", "Copying Mechanism", "Experiment Settings", "Training Dataset", "Testing Datasets", "Implementation Details", "Baseline Models", "Evaluation Metric", "Results and Analysis", "Predicting Present Keyphrases", "Predicting Absent Keyphrases", "Transferring the Model to the News Domain", "Discussion", "Conclusions and Future Work" ] }
GEM-SciDuet-train-83#paper-1214#slide-9
Conclusion & Future Work
Keyphrase generation study based on deep learning methods o First work concerns absent keyphrase prediction o RNN + Copy mechanism o Able to learn cross-domain features Better model on capturing contextual information Long documents, length & diversity penalties on output sequences
Keyphrase generation study based on deep learning methods o First work concerns absent keyphrase prediction o RNN + Copy mechanism o Able to learn cross-domain features Better model on capturing contextual information Long documents, length & diversity penalties on output sequences
[]
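The keyphrase record above evaluates models by how many gold keyphrases they recover, and in particular distinguishes phrases that occur in the source text from "absent" phrases that do not (e.g. "video indexing" paraphrased from "index videos"). As a rough illustration of that analysis, the Python sketch below splits gold keyphrases with a plain lowercase substring match and measures recall over the top-k predictions; the function names and the toy example are mine, and the paper's actual protocol (stemming before matching, the exact F1@k setup) is not reproduced here.

```python
from typing import List, Tuple

def split_present_absent(source_text: str, gold: List[str]) -> Tuple[List[str], List[str]]:
    # "Present" keyphrases occur verbatim in the source text, "absent" ones do not.
    # The paper stems words before matching; plain lowercase substring matching is
    # a simplification used here only for illustration.
    text = source_text.lower()
    present = [k for k in gold if k.lower() in text]
    absent = [k for k in gold if k.lower() not in text]
    return present, absent

def recall_at_k(predicted: List[str], gold: List[str], k: int = 10) -> float:
    # Fraction of gold keyphrases recovered among the top-k predictions.
    if not gold:
        return 0.0
    top_k = {p.lower() for p in predicted[:k]}
    return sum(1 for g in gold if g.lower() in top_k) / len(gold)

# Toy example (hypothetical abstract, gold phrases and predictions):
abstract = "We propose a method to index videos so that retrieval becomes efficient."
gold_keyphrases = ["video retrieval", "video indexing"]
predictions = ["video retrieval", "video indexing", "neural networks"]
present, absent = split_present_absent(abstract, gold_keyphrases)
print(absent, recall_at_k(predictions, absent))
```

On this toy input both gold phrases count as absent and both are recovered, mirroring the "video retrieval" / "video indexing" example discussed in the record.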
GEM-SciDuet-train-84#paper-1219#slide-0
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
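The annotation analysis summarised in the record above aggregates the per-annotator binary emotion labels with a threshold t (t=0.0 keeps every label assigned by at least one annotator, the high-recall aggregate; t=0.5 corresponds to the majority annotation) and reports pairwise Cohen's κ computed only on the tweets that both annotators saw. The sketch below shows one way to implement both steps; it assumes the labels for a single emotion are stored as an (annotators × tweets) array with NaN for unseen tweets, and the strict ">" comparison against t is my assumption, since the record does not spell out how ties are treated.

```python
from itertools import combinations
import numpy as np
from sklearn.metrics import cohen_kappa_score

def aggregate_labels(annotations: np.ndarray, t: float = 0.0) -> np.ndarray:
    # annotations: shape (n_annotators, n_tweets), 0/1 votes for one emotion,
    # NaN where an annotator did not see the tweet.
    # A label is kept when the fraction of annotators assigning it exceeds t:
    # t=0.0  -> at least one annotator (aggregated, high-recall annotation),
    # t=0.5  -> strict majority,
    # t=0.99 -> unanimous agreement.
    votes = np.nanmean(annotations, axis=0)
    return (votes > t).astype(int)

def pairwise_kappa(annotations: np.ndarray) -> dict:
    # Cohen's kappa for every annotator pair, restricted to tweets both annotated.
    scores = {}
    for i, j in combinations(range(annotations.shape[0]), 2):
        seen_by_both = ~np.isnan(annotations[i]) & ~np.isnan(annotations[j])
        if seen_by_both.any():
            scores[(i, j)] = cohen_kappa_score(
                annotations[i, seen_by_both], annotations[j, seen_by_both]
            )
    return scores
```

Under this reading, t=0.99 keeps only labels that all annotators agree on, which matches the description of the high-precision column in the record.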
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-0
Emotion Models: Plutchik's Wheel
Motivation Annotation Process and Analysis Baseline Models University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
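The BI-LSTM classifier that performs best in the record above is described as 300-dimensional skip-gram embeddings, dropout of 0.5 before the recurrent layer, a 175-dimensional LSTM per direction whose final states are concatenated, a 50-dimensional dense layer, and 8 sigmoid-gated outputs, trained with ADAM and minibatches of 32. The Keras sketch below mirrors that description; the framework choice, vocabulary size, dense-layer activation and loss function are assumptions of mine rather than details given in the record.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000    # assumed; the record does not give the vocabulary size
EMB_DIM = 300         # 300-dimensional skip-gram embeddings, as described
NUM_EMOTIONS = 8      # anger, anticipation, disgust, fear, joy, sadness, surprise, trust

model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, EMB_DIM),             # would be initialised with the pretrained vectors
    layers.Dropout(0.5),                                # dropout before the recurrent layer
    layers.Bidirectional(layers.LSTM(175)),             # 175-dim LSTM per direction, states concatenated
    layers.Dense(50, activation="relu"),                # 50-dim dense layer; activation is an assumption
    layers.Dense(NUM_EMOTIONS, activation="sigmoid"),   # one independent sigmoid output per emotion
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(),
    loss="binary_crossentropy",                         # multi-label setup; loss is an assumption
    metrics=["binary_accuracy"],
)

# model.fit(padded_token_ids, emotion_matrix, batch_size=32, validation_split=0.1)
```

A sigmoid output per emotion with a binary cross-entropy loss is the usual way to let the eight labels be predicted independently, which fits the multi-label framing of the annotation described in the record.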
GEM-SciDuet-train-84#paper-1219#slide-1
GEM-SciDuet-train-84#paper-1219#slide-1
Previous Work and State of the Art
Motivation Annotation Process and Analysis Baseline Models Name Data Size Domain Sentiment Strength tweets tweets tweets tweets tweets tweets tweets tweets Electoral Tweets descriptions sentences blogs headlines tweets tweets No manually annotated multi-label emotion corpus of Tweets available. (References are in the paper) University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models Name Data Size Domain Sentiment Strength tweets tweets tweets tweets tweets tweets tweets tweets Electoral Tweets descriptions sentences blogs headlines tweets tweets No manually annotated multi-label emotion corpus of Tweets available. (References are in the paper) University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
GEM-SciDuet-train-84#paper-1219#slide-2
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
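The pairwise Cohen's κ analysis described in the content above (minimum and maximum κ per emotion, computed on the tweets shared by each annotator pair) can be reproduced with a short script. This is a minimal sketch, not the authors' code; the data layout (one row per tweet/annotator, one 0/1 column per emotion) and all names are assumptions.

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score
import pandas as pd

# Assumed layout: one row per (tweet, annotator), one 0/1 column per emotion.
EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def pairwise_kappa(df: pd.DataFrame) -> pd.DataFrame:
    """Min/max Cohen's kappa over annotator pairs, per emotion."""
    rows = []
    for a1, a2 in combinations(df["annotator"].unique(), 2):
        # Restrict to tweets labelled by both annotators of the pair.
        x = df[df["annotator"] == a1].set_index("tweet_id")
        y = df[df["annotator"] == a2].set_index("tweet_id")
        shared = x.index.intersection(y.index)
        if len(shared) == 0:
            continue
        for emo in EMOTIONS:
            k = cohen_kappa_score(x.loc[shared, emo], y.loc[shared, emo])
            rows.append({"pair": (a1, a2), "emotion": emo, "kappa": k})
    return pd.DataFrame(rows).groupby("emotion")["kappa"].agg(["min", "max"])
```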
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
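The agreement-threshold aggregation discussed in the paper content above (a label is kept at threshold t if more than that fraction of a tweet's annotators assigned it, so t=0.0 is the "at least one annotator" annotation and t=0.5 the majority vote, with further strata at 0.33, 0.66 and 0.99) can be sketched as follows. The column names, data layout and exact tie handling are assumptions.

```python
import pandas as pd

def aggregate(df: pd.DataFrame, emotions, t: float) -> pd.DataFrame:
    """Collapse per-annotator 0/1 labels into one aggregated row per tweet.

    A label is assigned if strictly more than a fraction t of the tweet's
    annotators chose it: t=0.0 keeps everything at least one annotator
    marked (high recall), t=0.5 is the usual majority vote."""
    frac = df.groupby("tweet_id")[emotions].mean()  # fraction of annotators per label
    return (frac > t).astype(int)

# e.g. gold_recall   = aggregate(annotations, EMOTIONS, t=0.0)
#      gold_majority = aggregate(annotations, EMOTIONS, t=0.5)
```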
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-2
Task Description and Research Question
Motivation Annotation Process and Analysis Baseline Models (Additional annotation layers available) What's the inter-annotator agreement? Which annotation layers interact? How well is it possible to computationally estimate such annotations? University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models (Additional annotation layers available) What's the inter-annotator agreement? Which annotation layers interact? How well is it possible to computationally estimate such annotations? University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
GEM-SciDuet-train-84#paper-1219#slide-3
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
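The caveat about marginal frequencies raised in this section (Cicchetti and Feinstein, 1990) can be made concrete with a tiny synthetic example: the two annotator pairs below have identical raw agreement, yet κ drops sharply once the label becomes rare, because chance agreement on the frequent "no" class rises. The numbers are purely illustrative and not taken from the corpus.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def show(a, b, name):
    a, b = np.asarray(a), np.asarray(b)
    print(f"{name}: raw agreement={np.mean(a == b):.2f}, "
          f"kappa={cohen_kappa_score(a, b):.2f}")

# Balanced label, 4 disagreements out of 100 items.
a_bal = np.array([1] * 50 + [0] * 50)
b_bal = a_bal.copy()
b_bal[:2] = 0
b_bal[50:52] = 1
show(a_bal, b_bal, "balanced")  # agreement 0.96, kappa about 0.92

# Rare label (5% positive), also 4 disagreements out of 100 items.
a_rare = np.zeros(100, dtype=int)
a_rare[:5] = 1
b_rare = np.zeros(100, dtype=int)
b_rare[2:7] = 1
show(a_rare, b_rare, "rare")    # agreement 0.96, kappa about 0.58
```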
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
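The Bi-LSTM baseline described in the Models section of the paper content above (300-d embeddings, dropout 0.5 before the recurrent layer, a 175-d LSTM per direction with the two final states concatenated, a 50-d dense layer and 8 sigmoid output units, trained with ADAM and minibatch size 32) can be sketched in Keras. The framework, the loss function and the dense-layer activation are my assumptions; the layer sizes follow the text.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_bilstm(vocab_size: int, max_len: int, emb_dim: int = 300) -> keras.Model:
    """Bi-LSTM baseline roughly following the paper's description."""
    inp = keras.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, emb_dim)(inp)   # optionally initialised with pretrained vectors
    x = layers.Dropout(0.5)(x)                       # dropout before the recurrent layer
    x = layers.Bidirectional(layers.LSTM(175))(x)    # 175-d per direction, final states concatenated
    x = layers.Dense(50, activation="relu")(x)       # 50-d dense; activation is an assumption
    out = layers.Dense(8, activation="sigmoid")(x)   # one sigmoid unit per emotion (multi-label)
    model = keras.Model(inp, out)
    # Binary cross-entropy is an assumption; the paper states only ADAM and minibatch size 32.
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# model = build_bilstm(vocab_size=20000, max_len=40)
# model.fit(X_train, Y_train, batch_size=32, epochs=10, validation_split=0.1)
```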
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-3
Annotation Process
Annotation of SemEval 2016 Twitter Corpus Stance and sentiment annotations exist Six annotators finished their annotations Minimum number of annotations per Tweet is three 2776 Tweets annotated by four annotators Undergraduate students of media-informatics German native speakers, college-level knowledge of English Training of annotators based on another set of Tweets University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Annotation of SemEval 2016 Twitter Corpus Stance and sentiment annotations exist Six annotators finished their annotations Minimum number of annotations per Tweet is three 2776 Tweets annotated by four annotators Undergraduate students of media-informatics German native speakers, college-level knowledge of English Training of annotators based on another set of Tweets University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
GEM-SciDuet-train-84#paper-1219#slide-4
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
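Point (c) of this abstract refers to baseline modelling results, which the paper reports as per-emotion and micro-averaged precision, recall and F1 over the eight emotion labels. A minimal sketch of that evaluation for 0/1 multi-label prediction matrices; scikit-learn is my choice and not necessarily what the authors used.

```python
from sklearn.metrics import precision_recall_fscore_support

def evaluate(y_true, y_pred, emotions):
    """Per-emotion and micro-averaged P/R/F1 for 0/1 matrices of shape (n_tweets, n_emotions)."""
    p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average=None, zero_division=0)
    for emo, pi, ri, fi in zip(emotions, p, r, f):
        print(f"{emo:12s} P={pi:.2f} R={ri:.2f} F1={fi:.2f}")
    p, r, f, _ = precision_recall_fscore_support(y_true, y_pred, average="micro", zero_division=0)
    print(f"micro-avg    P={p:.2f} R={r:.2f} F1={f:.2f}")
```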
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
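To complement the Bi-LSTM sketch above, the one-layer CNN baseline from the Models section (given in full in the first copy of this paper's content: filter widths 2, 3 and 4 over 300-d embeddings, pooling of length 2, a ReLU dense layer, 8 sigmoid outputs, dropout 0.5 before and after the convolutions) can be sketched as follows. The number of filters, the dense width, the flattening step and the loss are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(vocab_size: int, max_len: int, emb_dim: int = 300,
              n_filters: int = 100, dense_dim: int = 50) -> keras.Model:
    """One-layer CNN baseline roughly following the paper's description."""
    inp = keras.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, emb_dim)(inp)
    x = layers.Dropout(0.5)(x)                 # dropout before the convolutions
    branches = []
    for width in (2, 3, 4):                    # filter widths given in the paper
        b = layers.Conv1D(n_filters, width, activation="relu")(x)
        b = layers.MaxPooling1D(pool_size=2)(b)   # pooling of length 2
        branches.append(layers.Flatten()(b))
    x = layers.Concatenate()(branches)
    x = layers.Dropout(0.5)(x)                 # dropout after the convolutions
    x = layers.Dense(dense_dim, activation="relu")(x)
    out = layers.Dense(8, activation="sigmoid")(x)   # one sigmoid unit per emotion
    model = keras.Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")  # loss is an assumption
    return model
```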
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
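As a sketch of the threshold-based aggregation discussed in the content above (a label is kept if more than a fraction t of a tweet's annotators assigned it, so t=0.0 yields the high-recall union and t=0.5 the majority vote), one plausible implementation in Python is shown below. The input format, a list of per-annotator label sets for a single tweet, is an assumption for illustration, not the format of the released corpus.

from collections import Counter

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def aggregate(annotations, t=0.0):
    # annotations: one set of emotion labels per annotator of this tweet.
    # A label is kept if the share of annotators who chose it exceeds t:
    # t=0.0 -> union (at least one annotator), t=0.5 -> majority vote,
    # t=0.99 -> only unanimous labels survive.
    n = len(annotations)
    counts = Counter(label for labels in annotations for label in labels)
    return {label for label, c in counts.items() if c / n > t}

# Hypothetical example: three annotators labelling one tweet.
tweet_annotations = [{"anger", "sadness"}, {"anger"}, {"surprise"}]
print(aggregate(tweet_annotations, t=0.0))  # anger, sadness, surprise (union)
print(aggregate(tweet_annotations, t=0.5))  # anger only (majority)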
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-4
Label Counts
Seldom that all annotators agree. Some classes are more difficult (Anticipation, Disgust, Fear, Sadness, Surprise) than others (Anger, Joy, Trust). Low number of majority vote annotations. Low quality of annotation combination?
Seldom that all annotators agree. Some classes are more difficult (Anticipation, Disgust, Fear, Sadness, Surprise) than others (Anger, Joy, Trust). Low number of majority vote annotations. Low quality of annotation combination?
[]
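The odds ratio used above to relate emotion, stance and sentiment labels, R(A:B) = P(A)(1 - P(B)) / (P(B)(1 - P(A))) with P(A) the probability that both labels hold for a tweet and P(B) the probability that exactly one does, can be computed from tweet-level label sets as sketched below. The data layout and the absence of smoothing are assumptions for illustration.

def odds_ratio(label_sets, a, b):
    # label_sets: one set of labels (emotions, stance, sentiment) per tweet.
    # Odds of "both labels hold" against odds of "exactly one holds";
    # assumes both event types occur at least once (no smoothing).
    n = len(label_sets)
    both = sum(1 for s in label_sets if a in s and b in s)
    one = sum(1 for s in label_sets if (a in s) != (b in s))
    p_a, p_b = both / n, one / n
    return (p_a * (1 - p_b)) / (p_b * (1 - p_a))

# Hypothetical tweets carrying emotion and sentiment labels.
tweets = [{"anger", "negative"}, {"anger", "negative"},
          {"joy", "positive"}, {"anger", "positive"},
          {"sadness", "negative"}]
print(odds_ratio(tweets, "anger", "negative"))  # 1.0 for this toy sample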
GEM-SciDuet-train-84#paper-1219#slide-5
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
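The BI-LSTM baseline described in the content above (300-dimensional embeddings, a 175-dimensional LSTM in each direction with the final states concatenated, a 50-dimensional dense layer, 8 sigmoid outputs, and dropout of 0.5 before the recurrent layer) could look roughly as follows in Keras. This is a sketch rather than the authors' implementation; the vocabulary size, padded length, the ReLU activation of the dense layer, and the binary cross-entropy loss are assumptions.

import tensorflow as tf
from tensorflow.keras import layers

VOCAB_SIZE = 20000   # placeholder: tweet vocabulary size
MAX_LEN = 50         # placeholder: padded tweet length
NUM_EMOTIONS = 8

model = tf.keras.Sequential([
    tf.keras.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 300),                 # could be initialised with the skip-gram vectors
    layers.Dropout(0.5),                               # dropout before the recurrent layer
    layers.Bidirectional(layers.LSTM(175)),            # forward/backward final states are concatenated
    layers.Dense(50, activation="relu"),               # activation is an assumption; not stated in the text
    layers.Dense(NUM_EMOTIONS, activation="sigmoid"),  # one independent decision per emotion
])
model.compile(optimizer=tf.keras.optimizers.Adam(),    # the text mentions ADAM and minibatches of 32
              loss="binary_crossentropy",              # multi-label setup: sigmoid outputs, binary loss
              metrics=["binary_accuracy"])
model.summary()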
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-5
Inter-annotator Agreement
Range of pairwise agreement between all annotator pairs.
Range of pairwise agreement between all annotator pairs.
[]
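Model quality in the content above is reported as per-emotion F1 and micro-averaged F1 over the eight labels. Given binary indicator matrices of gold and predicted labels, both can be computed with scikit-learn as sketched below; the matrices themselves are made-up examples.

import numpy as np
from sklearn.metrics import f1_score

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

# Hypothetical gold and predicted label matrices: three tweets (rows), eight emotions (columns).
y_true = np.array([[1, 0, 0, 0, 0, 1, 0, 0],
                   [0, 1, 0, 0, 1, 0, 0, 1],
                   [1, 0, 1, 0, 0, 0, 0, 0]])
y_pred = np.array([[1, 0, 0, 0, 0, 0, 0, 0],
                   [0, 1, 0, 0, 1, 0, 0, 0],
                   [1, 0, 0, 1, 0, 0, 0, 0]])

micro = f1_score(y_true, y_pred, average="micro")
per_emotion = f1_score(y_true, y_pred, average=None, zero_division=0)  # 0 for emotions absent from the sample
print(f"micro-F1: {micro:.2f}")
for name, score in zip(EMOTIONS, per_emotion):
    print(f"{name}: {score:.2f}")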
GEM-SciDuet-train-84#paper-1219#slide-6
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
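The odds ratio used in the cooccurrence analysis above lost its fraction bar in extraction; as defined in the text it is R(A:B) = P(A)(1 − P(B)) / (P(B)(1 − P(A))), where P(A) is the probability that both labels hold for a tweet and P(B) the probability that exactly one of them holds. Below is a minimal sketch of that computation; the function name, the toy label vectors, and the use of NumPy are illustrative assumptions, not part of the released resource.

```python
import numpy as np

def odds_ratio(p_both: float, p_one: float) -> float:
    """R(A:B) = P(A)(1 - P(B)) / (P(B)(1 - P(A))), where P(A) is the
    probability that both labels hold for a tweet and P(B) the
    probability that exactly one of them holds."""
    return (p_both * (1.0 - p_one)) / (p_one * (1.0 - p_both))

# Toy per-tweet indicators for two labels, e.g. "anger" and "negative
# sentiment" (the values are invented, not corpus counts):
anger    = np.array([1, 1, 0, 1, 0, 1, 0, 0], dtype=bool)
negative = np.array([1, 1, 0, 1, 1, 1, 0, 0], dtype=bool)

p_both = np.mean(anger & negative)      # both labels on the same tweet
p_one  = np.mean(anger ^ negative)      # exactly one of the two labels
print(round(odds_ratio(p_both, p_one), 2))   # -> 7.0
```

A ratio above 1 means the joint labelling is that many times more likely than independent labelling, which matches the reading given in the text.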
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-6
Difficult Examples 1
Motivation Annotation Process and Analysis Baseline Models Anger Anticipation Disgust Fear Joy Sadness Surprise Trust University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models Anger Anticipation Disgust Fear Joy Sadness Surprise Trust University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
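The BI-LSTM baseline described in the record above (300-dimensional embeddings, dropout 0.5 before a 175-dimensional bidirectional LSTM, a fully-connected 50-dimensional dense layer, and 8 sigmoid-gated outputs, trained with ADAM and minibatches of 32) can be sketched roughly as below. This is a hedged illustration, not the authors' implementation: the vocabulary size, padded length, dense-layer activation, binary cross-entropy loss, and the Keras API are assumptions, and the pretrained skip-gram embeddings are omitted.

```python
from tensorflow.keras import layers, models, optimizers

VOCAB_SIZE = 20000   # assumption: vocabulary size is not stated in the paper
MAX_LEN = 50         # assumption: padded tweet length is not stated either

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    # 300-dimensional embeddings; the paper initialises these with skip-gram
    # vectors trained on ~8M tweet tokens (pretrained weights omitted here).
    layers.Embedding(VOCAB_SIZE, 300),
    layers.Dropout(0.5),                     # dropout of 0.5 before the LSTM
    layers.Bidirectional(layers.LSTM(175)),  # 175-dim LSTM, both directions concatenated
    layers.Dense(50, activation="relu"),     # 50-dim dense layer (activation assumed)
    layers.Dense(8, activation="sigmoid"),   # one sigmoid output per emotion
])

model.compile(optimizer=optimizers.Adam(), loss="binary_crossentropy")
# model.fit(x_train, y_train, batch_size=32, ...)  # minibatch size 32 as stated
```

Treating the 8 emotions as independent sigmoid outputs mirrors the multi-label framing of the annotation task.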
GEM-SciDuet-train-84#paper-1219#slide-7
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
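The records above define the aggregated annotation through a voting threshold t: an emotion is kept for a tweet when the fraction of annotators who assigned it exceeds t, so t=0.0 keeps every label chosen by at least one annotator (the high-recall aggregated annotation), t=0.5 keeps the majority labels, and t=0.99 only unanimous ones. A small illustrative helper along those lines, with made-up annotator labels and function names (not the released tooling):

```python
from typing import List

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def aggregate(annotations: List[List[str]], t: float) -> List[str]:
    """Keep an emotion when the fraction of annotators assigning it exceeds t:
    t=0.0 -> at least one annotator, t=0.5 -> strict majority, t=0.99 -> all."""
    n = len(annotations)
    return [e for e in EMOTIONS
            if sum(e in labels for labels in annotations) / n > t]

# Three annotators labelling one tweet (labels are made up):
tweet = [["anger", "sadness"], ["sadness"], ["surprise"]]
print(aggregate(tweet, 0.0))   # ['anger', 'sadness', 'surprise']
print(aggregate(tweet, 0.5))   # ['sadness']
```

Sweeping t over the same pool of per-tweet annotations is all that is needed to reproduce the t=0.0/0.33/0.5/0.66/0.99 strata compared in the tables.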
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-7
Difficult Examples 2
Motivation Annotation Process and Analysis Baseline Models 2 pretty sisters are dancing with cancered kid Anger Anticipation Disgust Fear Joy Sadness Surprise Trust University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models 2 pretty sisters are dancing with cancered kid Anger Anticipation Disgust Fear Joy Sadness Surprise Trust University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
GEM-SciDuet-train-84#paper-1219#slide-8
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
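Inter-annotator agreement in the records above is reported as pairwise Cohen's κ per emotion, computed over the tweets that both annotators in a pair labelled. A hedged sketch of that computation with scikit-learn; the dictionary layout and the toy decisions are invented for illustration:

```python
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

def pairwise_kappa(labels_by_annotator):
    """labels_by_annotator maps an annotator id to {tweet_id: 0/1}, the
    annotator's binary decision for a single emotion (e.g. anger).
    Kappa for each pair is computed on the tweets both have labelled."""
    scores = {}
    for a, b in combinations(labels_by_annotator, 2):
        shared = sorted(set(labels_by_annotator[a]) & set(labels_by_annotator[b]))
        y_a = [labels_by_annotator[a][t] for t in shared]
        y_b = [labels_by_annotator[b][t] for t in shared]
        scores[(a, b)] = cohen_kappa_score(y_a, y_b)
    return scores

# Toy anger decisions of three annotators on overlapping tweet sets:
anger = {
    "A1": {1: 1, 2: 0, 3: 1, 4: 0},
    "A2": {1: 1, 2: 0, 3: 0, 4: 1},
    "A3": {2: 1, 3: 1, 4: 0},
}
print(pairwise_kappa(anger))
```

Low κ under skewed label marginals is exactly the caveat raised in the text, which is why the model-based (extrinsic) comparison is used alongside this measure.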
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-8
Cooccurrences of Labels (t=0.0)
Motivation Annotation Process and Analysis Baseline Models Anger Anticipation Disgust Fear Joy Sadness Surprise Trust Positive Negative Neutral In Favor Against None Many cooccurrences as expected (pos w/ pos, neg w/ neg)Positive Anger Negative Joy Positive Disgust University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models Anger Anticipation Disgust Fear Joy Sadness Surprise Trust Positive Negative Neutral In Favor Against None Many cooccurrences as expected (pos w/ pos, neg w/ neg)Positive Anger Negative Joy Positive Disgust University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
GEM-SciDuet-train-84#paper-1219#slide-9
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P(A)(1 − P(B)) / (P(B)(1 − P(A))), where P(A) is the probability that both labels (at row and column in the table) hold for a tweet and P(B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increases (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness), the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply five standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
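The aggregation thresholds used above (t=0.0 as the high-recall union of all annotators, t=0.5 as the majority vote) and the odds ratio R(A:B) are straightforward to reproduce. The sketch below is illustrative only, not the authors' code; the function names, the strict `> t` comparison and the input format (one label set per annotator, one list of label sets per tweet) are assumptions made here.

```python
from collections import Counter

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def aggregate(annotator_labels, t=0.0):
    """Merge the label sets of all annotators for one tweet.
    A label is kept if the fraction of annotators who assigned it is
    greater than t, so t=0.0 is the high-recall union ("at least one
    annotator") and t=0.5 the usual majority vote."""
    counts = Counter(label for labels in annotator_labels for label in labels)
    n = len(annotator_labels)
    return {label for label, c in counts.items() if c / n > t}

def odds_ratio(tweet_labels, x, y):
    """Odds ratio between labels x and y over a corpus, following the
    definition in the paper: P(A) is the probability that both labels
    hold for a tweet, P(B) that exactly one of them holds, and
    R(A:B) = P(A)(1 - P(B)) / (P(B)(1 - P(A)))."""
    n = len(tweet_labels)
    p_both = sum((x in ls) and (y in ls) for ls in tweet_labels) / n
    p_one = sum((x in ls) != (y in ls) for ls in tweet_labels) / n
    return (p_both * (1 - p_one)) / (p_one * (1 - p_both))

# Example: one tweet annotated by three people.
tweet = [{"anger", "sadness"}, {"anger"}, {"surprise"}]
print(aggregate(tweet, t=0.0))  # union: {'anger', 'sadness', 'surprise'}
print(aggregate(tweet, t=0.5))  # majority: {'anger'}
```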
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-9
Examples
Motivation Annotation Process and Analysis Baseline Models Lets take back our country! Whos with me? No more Why criticise religions? If a path is not your own. Dont be pretentious. And get down from your throne. Global Warming! Global Warming! Global Warming! Oh wait, its summer. I love the smell of Hillary in the morning. It smells like #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder! University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models Lets take back our country! Whos with me? No more Why criticise religions? If a path is not your own. Dont be pretentious. And get down from your throne. Global Warming! Global Warming! Global Warming! Oh wait, its summer. I love the smell of Hillary in the morning. It smells like #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder! University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
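The neural baselines are initialised with 300-dimensional skip-gram vectors trained with negative sampling on roughly 8 million tweet tokens (window 5, 15 negative samples, 5 iterations). A comparable embedding model could be trained as sketched below; the gensim 4.x API, the toy corpus and the output file name are assumptions, not the authors' tooling.

```python
from gensim.models import Word2Vec

# Placeholder corpus; the paper uses ~8 million tokens from a Twitter crawl.
tweets = [["global", "warming", "is", "real"],
          ["love", "the", "smell", "of", "victory"]]

w2v = Word2Vec(
    sentences=tweets,
    vector_size=300,  # 300-dimensional vectors
    window=5,         # context window of size 5
    negative=15,      # 15 negative samples
    sg=1,             # skip-gram rather than CBOW
    epochs=5,         # 5 iterations over the corpus
    min_count=1,      # keep all tokens so the toy corpus trains at all
)
w2v.save("twitter_skipgram_300d.model")  # hypothetical file name
```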
GEM-SciDuet-train-84#paper-1219#slide-10
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
GEM-SciDuet-train-84#paper-1219#slide-10
Models Experimental Setting
Motivation Annotation Process and Analysis Baseline Models 175 dimensional LSTM layer, 0.5 dropout rate 50 dimensional dense layer Convolution of window size 2,3,4 Pooling of length 2 (Twitter specific embeddings are used.) University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
Motivation Annotation Process and Analysis Baseline Models 175 dimensional LSTM layer, 0.5 dropout rate 50 dimensional dense layer Convolution of window size 2,3,4 Pooling of length 2 (Twitter specific embeddings are used.) University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
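The slide above lists the CNN settings from the paper (convolution window sizes 2, 3 and 4, pooling of length 2, dropout 0.5, Twitter-specific embeddings). A rough Kim-style sketch with parallel convolution branches is given below; the number of filters per window size and the use of Flatten before the dense layer are not stated in the text and are assumptions.

```python
from tensorflow.keras import layers, Model

def build_cnn(vocab_size, maxlen, embedding_dim=300,
              n_filters=100, n_emotions=8):
    """Kim-style CNN with parallel convolutions of width 2, 3 and 4."""
    inp = layers.Input(shape=(maxlen,))
    emb = layers.Embedding(vocab_size, embedding_dim)(inp)
    emb = layers.Dropout(0.5)(emb)               # dropout before the convolutional layers
    branches = []
    for k in (2, 3, 4):                          # window sizes from the paper
        c = layers.Conv1D(n_filters, k, activation="relu")(emb)
        c = layers.MaxPooling1D(pool_size=2)(c)  # pooling of length 2
        branches.append(layers.Flatten()(c))
    x = layers.Concatenate()(branches)
    x = layers.Dropout(0.5)(x)                   # dropout after the convolutional layers
    x = layers.Dense(50, activation="relu")(x)
    out = layers.Dense(n_emotions, activation="sigmoid")(x)
    model = Model(inp, out)
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model
```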
GEM-SciDuet-train-84#paper-1219#slide-11
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-11
Models for t=0.0
Motivation Annotation Process and Analysis Baseline Models University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
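The paper text quoted in the record above describes two analyses that are easy to reproduce in code: pairwise Cohen's κ between annotators, and an aggregated annotation that keeps an emotion label when the fraction of a tweet's annotators who chose it exceeds a threshold t (t=0.0 keeps labels chosen by at least one annotator, t=0.5 gives the majority annotation, t=0.99 keeps only unanimous labels). The Python sketch below illustrates both computations on a made-up per-annotator label structure; the variable names, the `annotations` layout, and the use of scikit-learn's `cohen_kappa_score` are assumptions for illustration, not the released corpus format.

```python
# Sketch: aggregate multi-annotator emotion labels by threshold t and
# compute pairwise Cohen's kappa per emotion. Data layout is hypothetical.
from itertools import combinations
from sklearn.metrics import cohen_kappa_score

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

# annotations[tweet_id][annotator_id] -> set of emotion labels (assumed format)
annotations = {
    "t1": {"a1": {"anger", "sadness"}, "a2": {"anger"}, "a3": {"surprise"}},
    "t2": {"a1": {"joy"}, "a2": {"joy", "trust"}, "a4": {"joy"}},
}

def aggregate(annotations, t):
    """Keep a label if more than a fraction t of a tweet's annotators chose it.

    t=0.0 -> at least one annotator; t=0.5 -> majority; t=0.99 -> all of them.
    """
    gold = {}
    for tweet, per_annotator in annotations.items():
        n = len(per_annotator)
        counts = {e: sum(e in labels for labels in per_annotator.values())
                  for e in EMOTIONS}
        gold[tweet] = {e for e, c in counts.items() if c / n > t}
    return gold

def pairwise_kappa(annotations, emotion):
    """Cohen's kappa for each annotator pair, on the tweets both annotated."""
    annotators = sorted({a for per in annotations.values() for a in per})
    scores = {}
    for a, b in combinations(annotators, 2):
        shared = [tw for tw, per in annotations.items() if a in per and b in per]
        if not shared:
            continue
        ya = [int(emotion in annotations[tw][a]) for tw in shared]
        yb = [int(emotion in annotations[tw][b]) for tw in shared]
        if len(set(ya)) == 1 and ya == yb:
            # kappa is undefined when both annotators give one identical label
            continue
        scores[(a, b)] = cohen_kappa_score(ya, yb)
    return scores

if __name__ == "__main__":
    print(aggregate(annotations, t=0.0))   # high-recall aggregated annotation
    print(aggregate(annotations, t=0.5))   # majority annotation
    print(pairwise_kappa(annotations, "anger"))
```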
GEM-SciDuet-train-84#paper-1219#slide-12
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-12
Annotation Aggregation Methods BiLSTM
Motivation Annotation Process and Analysis Baseline Models University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th, 2017
[]
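The record above quotes the odds-ratio analysis used to relate emotion labels to the stance and sentiment layers: R(A:B) = P(A)(1 − P(B)) / (P(B)(1 − P(A))), where P(A) is the probability that both labels hold for a tweet and P(B) the probability that exactly one does. The minimal sketch below follows that definition on a hypothetical list of per-tweet label sets rather than the released data.

```python
# Sketch: odds ratio for co-occurring labels, following the definition quoted
# in the paper text above (P(A): both labels on a tweet, P(B): exactly one).
def odds_ratio(tweets, label_a, label_b):
    """R(A:B) = P(A)(1 - P(B)) / (P(B)(1 - P(A)))."""
    n = len(tweets)
    both = sum(label_a in t and label_b in t for t in tweets)
    one = sum((label_a in t) != (label_b in t) for t in tweets)
    p_a, p_b = both / n, one / n
    if p_b == 0:
        return float("inf")  # no tweet carries exactly one of the two labels
    return (p_a * (1 - p_b)) / (p_b * (1 - p_a))

# Hypothetical per-tweet label sets mixing the emotion and sentiment layers.
tweets = [
    {"anger", "negative"},
    {"anger", "negative", "disgust"},
    {"anger", "negative"},
    {"anger", "positive"},
    {"joy", "positive"},
]
print(odds_ratio(tweets, "anger", "negative"))  # 6.0: joint labeling 6x more likely
```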
GEM-SciDuet-train-84#paper-1219#slide-13
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
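The odds-ratio analysis of co-occurring annotation layers described above, R(A:B) = P(A)(1 − P(B)) / (P(B)(1 − P(A))), is straightforward to reproduce. The sketch below is not the authors' released code; the label vectors are invented purely for illustration.

# Minimal sketch: odds ratio between two binary annotation layers for the
# same set of tweets, e.g. the "anger" emotion and negative sentiment.
def odds_ratio(label_a, label_b):
    """label_a, label_b: parallel lists of 0/1 flags, one entry per tweet."""
    n = len(label_a)
    both = sum(1 for a, b in zip(label_a, label_b) if a and b)
    only_one = sum(1 for a, b in zip(label_a, label_b) if bool(a) != bool(b))
    p_a = both / n        # P(A): both labels hold for a tweet
    p_b = only_one / n    # P(B): exactly one of the two labels holds
    return (p_a * (1 - p_b)) / (p_b * (1 - p_a))

# A ratio of x means joint labelling is x times more likely than one label
# occurring without the other. Toy data, not corpus counts:
anger    = [1, 1, 0, 1, 0, 0, 1, 0]
negative = [1, 1, 0, 1, 1, 0, 1, 0]
print(round(odds_ratio(anger, negative), 2))   # -> 7.0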
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-13
Performance vs Frequency
Motivation Annotation Process and Analysis Baseline Models University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th
Motivation Annotation Process and Analysis Baseline Models University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th
[]
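The voting thresholds discussed in the paper (t=0.0 keeps every label assigned by at least one annotator, t=0.5 is the majority vote, t=0.99 requires unanimity) boil down to a small aggregation step. The following is an illustrative sketch, not the released annotation tooling; the example votes are invented.

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

def aggregate(annotations, t):
    """annotations: one set of emotion labels per annotator for a single tweet.
    Keeps a label if more than a fraction t of the annotators assigned it."""
    n = len(annotations)
    return {e for e in EMOTIONS
            if sum(e in labels for labels in annotations) / n > t}

votes = [{"anger", "sadness"}, {"anger"}, {"surprise"}]
print(aggregate(votes, t=0.0))   # union of all assigned labels (the aggregated annotation)
print(aggregate(votes, t=0.5))   # only "anger" survives the majority vote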
GEM-SciDuet-train-84#paper-1219#slide-14
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
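The word representations used to initialise the neural baselines (skip-gram with negative sampling, 300 dimensions, window size 5, 15 negative samples, 5 iterations over roughly 8 million tweet tokens) can be trained with gensim as sketched below. The library choice and the tokenised_tweets placeholder are assumptions; the hyper-parameters are the paper's.

from gensim.models import Word2Vec

tokenised_tweets = [["global", "warming", "is", "real"],
                    ["no", "more", "democrats"]]   # toy stand-in for the ~8M-token corpus

w2v = Word2Vec(sentences=tokenised_tweets,
               vector_size=300,   # embedding dimensionality
               window=5,          # context window size
               sg=1,              # skip-gram
               negative=15,       # negative samples
               epochs=5,          # iterations over the corpus
               min_count=1)       # keep every token in this tiny example
vector = w2v.wv["warming"]        # 300-dimensional vector used for initialisation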
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-14
Conclusion and Summary
Motivation Annotation Process and Analysis Baseline Models Multi-label emotion annotation is a challenging task We publish all annotations Aggregation by disjunction leads to annotation which can better be modeled computationally Linear and neural models perform similarly well University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th
Motivation Annotation Process and Analysis Baseline Models Multi-label emotion annotation is a challenging task We publish all annotations Aggregation by disjunction leads to annotation which can better be modeled computationally Linear and neural models perform similarly well University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th
[]
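The per-emotion and micro-averaged F1 scores reported in the result tables can be computed as below. The prediction matrix is a toy stand-in; in the paper it would come from the trained classifiers on the held-out data.

import numpy as np
from sklearn.metrics import f1_score

EMOTIONS = ["anger", "anticipation", "disgust", "fear",
            "joy", "sadness", "surprise", "trust"]

y_true = np.array([[1, 0, 0, 0, 0, 1, 0, 0],    # gold multi-label matrix (tweets x emotions)
                   [1, 1, 0, 0, 0, 0, 0, 1],
                   [0, 0, 0, 0, 1, 0, 0, 1]])
y_pred = np.array([[1, 0, 0, 0, 0, 0, 0, 0],    # invented model predictions
                   [1, 1, 0, 0, 0, 0, 0, 1],
                   [0, 1, 0, 0, 1, 0, 0, 1]])

per_emotion = f1_score(y_true, y_pred, average=None, zero_division=0)
for name, score in zip(EMOTIONS, per_emotion):
    print(f"{name:12s} F1 = {score:.2f}")
print("micro-average F1 =", round(f1_score(y_true, y_pred, average="micro"), 2))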
GEM-SciDuet-train-84#paper-1219#slide-15
1219
Annotation, Modelling and Analysis of Fine-Grained Emotions on a Stance and Sentiment Detection Corpus
There is a rich variety of data sets for sentiment analysis (viz., polarity and subjectivity classification). For the more challenging task of detecting discrete emotions following the definitions of Ekman and Plutchik, however, there are much fewer data sets, and notably no resources for the social media domain. This paper contributes to closing this gap by extending the SemEval 2016 stance and sentiment dataset with emotion annotation. We (a) analyse annotation reliability and annotation merging; (b) investigate the relation between emotion annotation and the other annotation layers (stance, sentiment); (c) report modelling results as a baseline for future work.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178 ], "paper_content_text": [ "Introduction Emotion recognition is a research area in natural language processing concerned with associating words, phrases or documents with predefined emotions from psychological models.", "Discrete emotion recognition assigns categorial emotions (Ekman, 1999; Plutchik, 2001) , namely Anger, Anticipation, Disgust, Fear, Joy, Sadness, Surprise und Trust.", "Compared to the very active area of sentiment analysis, whose goal is to recognize the polarity of text (e. g., positive, negative, neutral, mixed), few resources are available for discrete emotion analysis.", "Emotion analysis has been applied to several domains, including tales (Alm et al., 2005) , blogs (Aman and Szpakowicz, 2007) and microblogs (Dodds et al., 2011) .", "The latter in particular provides a major data source in the form of user messages from platforms such as Twitter (Costa et al., * We thank Marcus Hepting, Chris Krauter, Jonas Vogelsang, Gisela Kollotzek for annotation and discussion.", "2014) which contain semi-structured information (hashtags, emoticons, emojis) that can be used as weak supervision for training classifiers (Suttles and Ide, 2013) .", "The classifier then learns the association of all other words in the message with the \"self-labeled\" emotion (Wang et al., 2012) .", "While this approach provides a practically feasible approximation of emotions, there is no publicly available, manually vetted data set for Twitter emotions that would support accurate and comparable evaluations.", "In addition, it has been shown that distant annotation is conceptually different from manual annotation for sentiment and emotion (Purver and Battersby, 2012) .", "With this paper, we contribute manual emotion annotation for a publicly available Twitter data set.", "We annotate the SemEval 2016 Stance Data set (Mohammad et al., 2016) which provides sentiment and stance information and is popular in the research community (Augenstein et al., 2016; Wei et al., 2016; Dias and Becker, 2016; Ebrahimi et al., 2016) .", "It therefore enables further research on the relations between sentiment, emotions, and stances.", "For instance, if the distribution of subclasses of positive or negative emotions is different for against and in-favor, emotion-based features could contribute to stance detection.", "An additional feature of our resource is that we do not only provide a \"majority annotation\" as is usual.", "We do define a well-performing aggregated annotation, but additionally provide the individual labels of each of our six annotators.", "This enables further research on differences in the perception of emotions.", "Background and Related Work For a review of the fundaments of emotion and sentiment and 
the differences between these concepts, we refer the reader to Munezero et al.", "(2014) .", "For sentiment analysis, a large number of annotated data sets exists.", "These include review texts from different domains, for instance from Amazon and other shopping sites (Hu and Liu, 2004; Ding et al., 2008; Toprak et al., 2010; Lakkaraju et al., 2011) , restaurants (Ganu et al., 2009) , news articles (Wiebe et al., 2005) , blogs (Kessler et al., 2010) , as well as microposts on Twitter.", "For the latter, shown in the upper half of Table 1 , there are general corpora (Nakov et al., 2013; Spina et al., 2012; Thelwall et al., 2012) as well as ones focused on very specific subdomains, for instance on Obama-McCain Debates (Shamma et al., 2009) , Health Care Reforms (Speriosu et al., 2011) .", "A popular example for a manually annotated corpus for sentiment, which includes stance annotation for a set of topics is the SemEval 2016 data set (Mohammad et al., 2016) .", "For emotion analysis, the set of annotated resources is smaller (compare the lower half of Table 1).", "A very early resource is the ISEAR data set (Scherer and Wallbott, 1997) A notable gap is the unavailability of a publicly available set of microposts (e. g., tweets) with emotion labels.", "To the best of our knowledge, there are only three previous approaches to labeling tweets with discrete emotion labels.", "One is the recent data set on for emotion intensity estimation, a shared task aiming at the development of a regression model.", "The goal is not to predict the emotion class, but a distribution over their intensities, and the set of emotions is limited to fear, sadness, anger, and joy (Mohammad and Bravo-Marquez, 2017) .", "Most similar to our work is a study by Roberts et al.", "(2012) which annotated 7,000 tweets manually for 7 emotions (anger, disgust, fear, joy, love, sadness and surprise).", "They chose 14 topics which they believe should elicit emotional tweets and collect hashtags to help identify tweets that are on these topics.", "After several iterations, the annotators reached κ = 0.67 inter-annotator agreement on 500 tweets.", "Unfortunately, the data appear not to be available any more.", "An additional limitation of that dataset was that 5,000 of the 7,000 tweets were annotated by one annotator only.", "In contrast, we provide several annotations for each tweet.", "Mohammad et al.", "(2015) annotated electoral tweets for sentiment, intensity, semantic roles, style, purpose and emotions.", "This is the only available corpus similar to our work we are aware of.", "However, the focus of this work was not emotion annotation in contrast to ours.", "In addition, we publish the data of all annotators.", "Corpus Annotation and Analysis Annotation Procedure As motivated above, we re-annotate the extended SemEval 2016 Stance Data set (Mohammad et al., 2016) which consists of 4,870 tweets (a subset of which was used in the SemEval competition).", "For a discussion of the differences of these data sets, we refer to .", "We omit two tweets with special characters, which leads to an overall set of 4,868 tweets used in our corpus.", "1 We frame annotation as a multi-label classification task at the tweet level.", "The tweets were annotated by a group of six independent annotators, with a minimum number of three annotations for each tweet (696 tweets were labeled by 6 annotators, 703 by 5 annotators, 2,776 by 4 annotators and 693 by 3 annotators).", "All annotators were undergraduate students of media computer science and 
between the age of 20 and 30.", "Only one annotator is female.", "All students are German native speak-1 Our annotations and original tweets are available at http://www.ims.uni-stuttgart.de/data/ ssec and http://alt.qcri.org/semeval2016/ task6/data/uploads/stancedataset.zip, see also http://alt.qcri.org/semeval2016/task6.", "To train the annotators on the task, we performed two training iterations based on 50 randomly selected tweets from the SemEval 2016 Task 4 corpus (Nakov et al., 2016) .", "After each iteration, we discussed annotation differences (informally) in face-to-face meetings.", "For the final annotation, tweets were presented to the annotators in a web interface which paired a tweet with a set of binary check boxes, one for each emotion.", "Taggers could annotate any set of emotions.", "Each annotator was assigned with 5/7 of the corpus with equally-sized overlap of instances based on an offset shift.", "Not all annotators finished their task.", "2 Emotion Annotation Reliability and Aggregated Annotation Our annotation represents a middle ground between traditional linguistic \"expert\" annotation and crowdsourcing: We assume that intuitions about emotions diverge more than for linguistic structures.", "At the same time, we feel that there is information in the individual annotations beyond the simple \"majority vote\" computed by most crowdsourcing studies.", "In this section, we analyse the annotations intrinsically; a modelling-based evaluation follows in Section 5.", "Our first analysis, shown in Table 2 , compares annotation strata with different agreement.", "For example, the column labeled 0.0 lists the frequencies of emotion labels assigned by at least one annotator, a high recall annotation.", "In contrast, the column labeled 0.99 lists frequencies for emotion labels that all annotators agreed on.", "This represents a high These numbers confirm that emotion labeling is a somewhat subjective task: only a small subset of the emotions labeled by at least one annotator (t=0.0) is labeled by most (t=0.66) or all of them (t=0.99).", "Interestingly, the exact percentage varies substantially by emotion, between 2 % for sadness and 20 % for anger.", "Many of these disagreements stem from tweets that are genuinely difficult to categorize emotionally, like That moment when Canadians realised global warming doesn't equal a tropical vacation for which one annotator chose anger and sadness, while one annotator chose surprise.", "Arguably, both annotations capture aspects of the meaning.", "Similarly, the tweet 2 pretty sisters are dancing with cancered kid (a reference to an online video) is marked as fear and sadness by one annotator and with joy and sadness by another.", "Naturally, not all differences arise from justified annotations.", "For instance the tweet #BIBLE = Big Irrelevant Book of Lies and Exaggerations has been labeled by two annotators with the emotion trust, presumably because of the word bible.", "This appears to be a classical oversight error, where the tweet is labeled on the basis of the first spotted keyword, without substantially studying its content.", "To quantify these observations, we follow general practice and compute a chance-corrected measure of inter-annotator agreement.", "Table 3 shows the minimum and maximum Cohen's κ values for pairs of annotators, computed on the intersection of instances annotated by either annotator within each pair.", "We obtain relatively high κ values of anger, joy, and trust, but lower values for the other emotions.", 
"These small κ values could be interpreted as indicators of problems with reliability.", "However, κ is notoriously difficult to interpret, and a number of studies have pointed out the influence of marginal frequencies (Cicchetti and Feinstein, 1990) : In the presence of skewed marginals (and most of our emotion labels are quite rare, cf.", "To avoid these methodological problems, we assess the usefulness of our annotation extrinsically by comparing the performance of computational models for different values of t. In a nutshell, these experiments will show best results t=0.0, i. e., the Table 5 : Tweet Counts (above diagonal) and odds ratio (below diagonal) for cooccurring annotations for all classes in the corpus (emotions based on majority annotation, t=0.5).", "high-recall annotation (see Section 5 for details).", "We therefore define t=0.0 as our aggregated annotation.", "For comparison, we also consider t=0.5, which corresponds to the majority annotation as generally adopted in crowdsourcing studies.", "Distribution of Emotions As shown in Table 2 , nearly 60 % of the overall tweet set are annotated with anger by at least one annotator.", "This is the predominant emotion class, followed by anticipation and sadness.", "This distribution is comparably uncommon and originates from the selection of tweets in SemEval as a stance data set.", "However, while anger clearly dominates in the aggregated annotation, its predominance weakens for the more precision-oriented data sets.", "For t=0.99, joy becomes the second most frequent emotion.", "In uniform samples from Twitter, joy typically dominates the distribution of emotions (Klinger, 2017) .", "It remains a question for future work how to reconciliate these observations.", "Table 4 shows the number of cooccurring label pairs (above the diagonal) and the odds ratios (below the diagonal) for emotion, stance, and sentiment annotations on the whole corpus for our aggregated annotation (t=0.0).", "Odds ratio is Emotion vs. 
other Annotation Layers R(A:B) = P (A)(1 − P (B)) P (B)(1 − P (A)) , where P (A) is the probability that both labels (at row and column in the table) hold for a tweet and P (B) is the probability that only one holds.", "A ratio of x means that the joint labeling is x times more likely than the independent labeling.", "Table 5 shows the same numbers for the majority annotation, t=0.5.", "We first analyze the relationship between emotions and sentiment polarity in Table 4 .", "For many emotions, the polarity is as expected: Joy and trust occur predominantly with positive sentiment, and anger, disgust, fear and sadness with negative sentiment.", "The emotions anticipation and surprise are, in comparison, most balanced between polarities, however with a majority for positive sentiment in anticipation and a negative sentiment for surprise.", "For most emotions there is also a non-negligible number of tweets with the sentiment opposite to a common expectation.", "For example, anger occurs 28 times with positive sentiment, mainly tweets which call for (positive) change regarding a controversial topic, for instance Lets take back our country!", "Whos with me?", "No more Democrats!2016 Why criticise religions?", "If a path is not your own.", "Don't be pretentious.", "And get down from your throne.", "Conversely, more than 15 % of the joy tweets carry negative sentiment.", "These are often cases in which either the emotion annotator or the sentiment annotator assumed some non-literal meaning to be associated with the text (mainly irony), for instance Global Warming!", "Global Warming!", "Global Warming!", "Oh wait, it's summer.", "I love the smell of Hillary in the morning.", "It smells like Republican Victory.", "Disgust occurs almost exclusively with negative sentiment.", "For the majority annotation (Table 5) , the number of annotations is smaller.", "However, the average size of the odds ratios increase (from 1.96 for t=0.0 to 5.39 for t=0.5).", "A drastic example is disgust in combination with negative sentiment, the predominant combination.", "Disgust is only labeled once with positive sentiment in the t=0.5 annotation: #WeNeedFeminism because #NoMeansNo it doesnt mean yes, it doesnt mean try harder!", "Similarly, the odds ratio for the combination anger and negative sentiment nearly doubles from 20.3 for t=0.0 to 41.47 for t=0.5.", "These numbers are an effect of the majority annotation having a higher precision in contrast to more \"noisy\" aggregation of all annotations (t=0.0).", "Regarding the relationship between emotions and stance, most odds ratios are relatively close to 1, indicating the absence of very strong correlations.", "Nevertheless, the \"Against\" stance is associated with a number of negative emotions (anger, disgust, sadness, the \"In Favor\" stance with joy, trust, and anticipation, and \"None\" with an absence of all emotions except surprise.", "Models We apply six standard models to provide baseline results for our corpus: Maximum Entropy (MAXENT), Support Vector Machines (SVM), a Long-Short Term Memory Network (LSTM), a Bidirectional LSTM (BI-LSTM), and a Convolutional Neural Network (CNN).", "MaxEnt and SVM classify each tweet separately based on a bag-of-words.", "For the first, the linear separator is estimated based on log-likelihood optimization with an L2 prior.", "For the second, the optimization follows a max-margin strategy.", "LSTM (Hochreiter and Schmidhuber, 1997 ) is a recurrent neural network architecture which includes a memory state capable of learning 
long distance dependencies.", "In various forms, they have proven useful for text classification tasks (Tai et al., 2015; Tang et al., 2016) .", "We implement a standard LSTM which has an embedding layer that maps the input (padded when needed) to a 300 dimensional vector.", "These vectors then pass to a 175 dimensional LSTM layer.", "We feed the final hidden state to a fully-connected 50-dimensional dense layer and use sigmoid to gate our 8 output neurons.", "As a regularizer, we use a dropout (Srivastava et al., 2014) of 0.5 before the LSTM layer.", "Bi-LSTM has the same architecture as the normal LSTM, but includes an additional layer with a reverse direction.", "This approach has produced stateof-the-art results for POS-tagging (Plank et al., 2016) , dependency parsing (Kiperwasser and Goldberg, 2016 ) and text classification (Zhou et al., 2016) , among others.", "We use the same parameters as the LSTM, but concatenate the two hidden layers before passing them to the dense layer.", "CNN has proven remarkably effective for text classification (Kim, 2014; dos Santos and Gatti, 2014; Flekova and Gurevych, 2016) .", "We train a simple one-layer CNN with one convolutional layer on top of pre-trained word embeddings, following Kim (2014) .", "The first layer is an embeddings layer that maps the input of length n (padded when needed) to an n x 300 dimensional matrix.", "The embedding matrix is then convoluted with filter sizes of 2, 3, and 4, followed by a pooling layer of length 2.", "This is then fed to a fully connected dense layer with ReLu activations and finally to the 8 output neurons, which are gated with the sigmoid function.", "We again use dropout (0.5), this time before and after the convolutional layers.", "For all neural models, we initialize our word representations with the skip-gram algorithm with negative sampling (Mikolov et al., 2013) , trained on nearly 8 million tokens taken from tweets collected using various hashtags.", "We create 300-dimensional vectors with window size 5, 15 negative samples and run 5 iterations.", "For OOV words, we use a vector initialized randomly between -0.25 and 0.25 to approximate the variance of the pretrained vectors.", "We train our models using ADAM (Kingma and Ba, 2015) and a minibatch size of 32.", "We set 10 % of Table 6 : Results of linear and neural models for labels from the aggregated annotation (t=0.0).", "For the neural models, we report the average of five runs and standard deviation in brackets.", "Best F 1 for each emotion shown in boldface.", "the training data aside to tune the hyperparameters for each model (hidden dimension size, dropout rate, and number of training epochs).", "Table 6 shows the results for our canonical annotation aggregation with t=0.0 (aggregated annotation) for our models.", "The two linear classifiers (trained as MAXENT and SVM) show comparable results, with an overall micro-average F 1 of 58 %.", "All neural network approaches show a higher performance of at least 2 percentage points (3 pp for LSTM, 4 pp for BI-LSTM, 2 pp for CNN).", "BI-LSTM also obtains the best F-Score for 5 of the 8 emotions (4 out of 8 for LSTM and CNN).", "We conclude that the BI-LSTM shows the best results of all our models.", "Our discussion focuses on this model.", "The performance clearly differs between emotion classes.", "Recall from Section 3.2 that anger, joy and trust showed much higher agreement numbers than the other annotations.", "There is however just a mild correlation between reliability and modeling 
performance.", "Anger is indeed modelled very well: it shows the best prediction performance with a similar precision and recall on all models.", "We ascribe this to it being the most frequent emotion class.", "In contrast, joy and trust show only middling performance, while we see relatively good results for anticipation and sadness even though there was considerable disagreement between annotators.", "We find the overall worst results for surprise.", "This is not surprising, surprise being a scarce label with also very low agreement.", "This might point towards underlying problems in the definition of surprise as an emotion.", "Some authors have split this class into positive and negative surprise in an attempt to avoid this (Alm et al., 2005) .", "Results We finally come to our justification for choosing t=0.0 as our aggregated annotation.", "Table 7 shows results for the best model (BI-LSTM) on the datasets for different thresholds.", "We see a clear downward monotone trend: The higher the threshold, the lower the F 1 measures.", "We obtain the best results, both for individual emotions and at the average level, for t=0.0.", "This is at least partially counterintuitive -we would have expected a dataset with \"more consensual\" annotation to yield better models -or at least models with higher precision.", "This is not the case.", "Our interpretation is that frequency effects outweigh any other considerations: As Table 2 shows, the amount of labeled data points drops sharply with higher thresholds: even between t=0.0 and t=0.33, on average half of the labels are lost.", "This interpretation is supported by the behavior of the individual emotions: for emotions where the data sets shrink gradually (anger, joy), performance drops gradually, while it dips sharply for emotions where the data sets shrink fast (disgust, fear).", "Somewhat surprisingly, therefore, we conclude that t=0.0 appears to be the Table 7 : Results of the BiLSTM for different voting thresholds.", "We report average results for each emotion over 5 runs (standard deviations are included in parenthesis).", "most useful datasets from a computational modeling perspective.", "In terms of how to deal with diverging annotations, we believe that this result bolsters our general approach to pay attention to individual annotators' labels rather than just majority votes: if the individual labels were predominantly noisy, we would not expect to see relatively high F 1 scores.", "Conclusion and Future Work With this paper, we publish the first manual emotion annotation for a publicly available micropost corpus.", "The resource we chose to annotate already provides stance and sentiment information.", "We analyzed the relationships among emotion classes and between emotions and the other annotation layers.", "In addition to the data set, we implemented wellknown standard models which are established for sentiment and polarity prediction for emotion classification.", "The BI-LSTM model outperforms all other approaches by up to 4 points F 1 on average compared to linear classifiers.", "Inter-annotator analysis showed a limited agreement between the annotators -the task is, at least to some degree, driven by subjective opinions.", "We found, however, that this is not necessarily a problem: Our models perform best on a high-recall aggregate annotation which includes all labels assigned by at least one annotator.", "Thus, we believe that the individual labels have value and are not, like generally assumed in crowdsourcing, noisy inputs 
suitable only as input for majority voting.", "In this vein, we publish all individual annotations.", "This enables further research on other methods of defining consensus annotations which may be more appropriate for specific downstream tasks.", "More generally, we will make all annotations, resources and model implementations publicly available." ] }
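The pairwise Cohen's κ values used in the annotation-reliability analysis can be computed per emotion over the tweets a pair of annotators both labelled; scikit-learn already provides the metric. The 0/1 decision vectors below are illustrative, not taken from the corpus.

from sklearn.metrics import cohen_kappa_score

# 1 = the annotator assigned "anger" to the tweet, 0 = did not; restricted
# to the tweets that both annotators in the pair labelled.
annotator_1_anger = [1, 0, 1, 1, 0, 0, 1, 0]
annotator_2_anger = [1, 0, 1, 0, 0, 0, 1, 1]

kappa = cohen_kappa_score(annotator_1_anger, annotator_2_anger)
print(f"Cohen's kappa for anger: {kappa:.2f}")
# Note the paper's caveat: kappa reacts strongly to skewed label marginals,
# which is why the annotation is also evaluated extrinsically via modelling.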
{ "paper_header_number": [ "1", "2", "3.1", "3.2", "3.3", "3.4", "4", "5", "6" ], "paper_header_content": [ "Introduction", "Background and Related Work", "Annotation Procedure", "Emotion Annotation Reliability and Aggregated Annotation", "Distribution of Emotions", "Emotion vs. other Annotation Layers", "Models", "Results", "Conclusion and Future Work" ] }
GEM-SciDuet-train-84#paper-1219#slide-15
Future Work
Motivation Annotation Process and Analysis Baseline Models Develop models which take into account label interactions explicitly Deeper linguistic analysis of annotation properties University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th
Motivation Annotation Process and Analysis Baseline Models Develop models which take into account label interactions explicitly Deeper linguistic analysis of annotation properties University of Stuttgart Schuff, Barnes, Mohme, Pado, Klinger September 8th
[]
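The future-work item on models that capture label interactions explicitly is left open in the paper; purely as a hypothetical starting point, a classifier chain lets each emotion classifier condition on the predictions made for the previously handled emotions. Everything below (features, labels, base classifier) is a placeholder, not something the paper implements.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import ClassifierChain

rng = np.random.default_rng(0)
X = rng.random((100, 20))                # placeholder tweet features
Y = rng.integers(0, 2, size=(100, 8))    # placeholder 8-way multi-label targets

chain = ClassifierChain(LogisticRegression(max_iter=1000),
                        order="random", random_state=0)
chain.fit(X, Y)                          # each link also sees the earlier emotion labels
print(chain.predict(X[:2]))              # two example multi-label predictions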
GEM-SciDuet-train-85#paper-1220#slide-1
1220
NAVER Machine Translation System for WAT 2015
In this paper, we describe NAVER machine translation system for English to Japanese and Korean to Japanese tasks at WAT 2015. We combine the traditional SMT and neural MT in both tasks.
{ "paper_content_id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123 ], "paper_content_text": [ "Introduction This paper explains the NAVER machine translation system for the 2nd Workshop on Asian Translation (WAT 2015) (Nakazawa et al., 2015) .", "We participate in two tasks; English to Japanese (En-Ja) and Korean to Japanese (Ko-Ja).", "Our system is a combined system of traditional statistical machine translation (SMT) and neural machine translation (NMT).", "We adopt the tree-tostring syntax-based model as En-Ja SMT baseline, while we adopt the phrase-based model as Ko-Ja.", "We propose improved SMT systems for each task and an NMT model based on the architecture using recurrent neural network (RNN) (Cho et al., 2014; Sutskever et al., 2014) .", "We give detailed explanations of each SMT system in section 2 and section 3.", "We describe our NMT model in section 4.", "2 English to Japanese Training data We used 1 million sentence pairs that are contained in train-1.txt of ASPEC-JE corpus for training the translation rule tables and NMT models.", "We also used 3 million Japanese sentences that are contained in train-1.txt, train-2.txt,train-3.txt of ASPEC-JE corpus for training the 5-gram language model.", "We also used 1,790 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.", "We filtered out the sentences that have 100 or more tokens from training data.", "Language Analyzer We used Moses tokenizer and Berkeley constituency parser 1 for tokenizing and parsing an English sentence.", "We used our own Japanese tokenizer and part-of-speech tagger for tokenizing and tagging a Japanese sentence.", "After running the tokenizer and the tagger, we make a token from concatenation of a word and its part-of-speech.", "Tree-to-string Syntax-based SMT To determining the baseline model, we first performed comparative experiments with the phrasebased, hierarchical phrase-based and syntax-based models.", "As a result, we chose the tree-to-string syntax-based model.", "The SMT models that consider source syntax such as tree-to-string and forest-to-string brought out better performance than the phrase-based and hierarchical phrase-based models in the WAT 2014 En-Ja task.", "The tree-to-string model was proposed by Huang (2006) and Liu (2006) .", "It utilizes the constituency tree of source language to extract translation rules and decode a target sentence.", "The translation rules are extracted from a source-parsed and word-aligned corpus in the training step.", "We use synchronous context free grammar (SCFG) rules.", "In addition, we used a rule augmentation method which is known as syntax-augmented machine translation (Zollmann and Venugopal, 2006) .", "Because the tree-to-string SMT makes some constraints on extracting rules by considering syntactic tree structures, it usually extracts fewer rules than hierarchical phrase-based SMT (HPBSMT) (Chiang, 2005) .", "Thus it is required to augment tree-to-string translation rules.", "The rule augmentation method allows the 
training system to extract more rules by modifying parse trees.", "Given a parse tree, we produce additional nodes by combining any pairs of neighboring nodes, not only children nodes, e.g.", "NP+VP.", "We limit the maximum span of each rule to 40 tokens in the rule extraction process.", "The tree-to-string decoder use a chart parsing algorithm with cube pruning proposed by Chiang (2005) .", "Our En-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.", "Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights • Cube-pruning-pop-limit = 3000 Handling Out-of-Vocabulary In order to handle out-of-vocabulary (OOV) words, we use two techniques; hyphen word split and spell error correction.", "The former is to split a word with hyphen (-) to two separate tokens before running the language analyzer.", "The latter is to automatically detect and correct spell errors in an input sentence.", "We give a detailed description of spell error correction in section 2.4.1.", "English Spell Correction It is not easy to translate a word including errata, because the erroneous word has only a slim chance of occurrence in the training data.", "We discovered a lot of spell errors among OOV words that appear in English scientific text.", "We introduce English spell correction for reducing OOV words in input sentences.", "We developed our spell corrector by using Aspell 2 .", "For detecting a spell error, we skip words that have only capitals, numbers or symbols, because they are likely to be abbreviations or mathematic expressions.", "Then we regard words detected by Aspell as spell error words.", "For correcting spell error, we use only top-3 suggestion words from Aspell.", "We find that a large gap between an original word and its suggestion word makes wrong correction.", "To avoid excessive correction, we introduce a gap thresholding technique, that ignores the suggestion word that has 3 or longer edit distance and selects one that has 3 Korean to Japanese Training data We used 1 million sentence pairs that are contained in JPO corpus for training phrase tables and NMT models.", "We also used Japanese part of the corpus for training the 5-gram language model.", "We also used 2,000 sentence pairs of dev.txt for tuning the weights of each feature of SMT linear model and as validation data of neural network.", "We did not filter out any sentences.", "Language Analyzer We used MeCab-ko 3 for tokenizing a Korean sentence.", "We used Juman 4 for tokenizing a Japanese sentence.", "We did not perform part-of-speech tagging for both languages.", "Phrase-based SMT As in the En-Ja task, we first performed comparative experiments with the phrase-based and hierarchical phrase-based models, and then adopt the phrase-based model as our baseline model.", "For the Ko-Ja task, we develop two phrasebased systems; word-level and character-level.", "We use word-level tokenization for the word-based system.", "We found that setting the distortion limit to zero yields better translation in aspect of both BLEU and human evaluation.", "We use the 5-gram language model.", "We use character-level tokenization for character-based system.", "We use the 10gram language model and set the maximum phrase length to 10 in the phrase pair extraction process.", "We found that the character-level system does not suffer from tokenization error and out-ofvocabulary 
issue.", "The JPO corpus contains many technical terms and loanwords like chemical compound names, which are more inaccurately tokenized and allow a lot of out-of-vocabulary tokens to be generated.", "Since Korean and Japanese share similar transliteration rules for loanwords, the character-level system can learn translation of unseen technical words.", "It generally produces better translations than a table-based transliteration.", "Moreover, we tested jamo-level tokenization 5 for Korean text, however, the preliminary test did not produce effective results.", "We also investigated a parentheses imbalance problem.", "We solved the problem by filtering out parentheses-imbalanced translations from the nbest results.", "We found that the post-processing step can improve the BLEU score with low order language models, but cannot do with high order language models.", "We do not use the step for final submission.", "To boosting the performance, we combine the word-level phrase-based model (Word PB) and the character-level phrase-based model (Char PB).", "If there are one or more OOV words in an input sentence, our translator choose the Char PB model, otherwise, the Word PB model.", "Our Ko-Ja SMT system was developed by using the open source SMT engines; Moses and Giza++.", "Its other specifications are as follows: • Grow-diag-final-and word alignment heuristic • Good-Turing discounting for smoothing probabilities • Minimum Error Rate Training (MERT) for tuning feature weights Neural Machine Translation Neural machine translation (NMT) is a new approach to machine translation that has shown promising results compared to the existing approaches such as phrase-based statistical machine translation (Cho et al., 2014; Sutskever et al., 2014; Bahdanau et al., 2015 ).", "An NMT system is a single neural network that reads a source sentence and generates its translation.", "Using the bilingual corpus, the whole neural network is jointly trained to maximize the conditional probability of a correct translation given a source sentence.", "NMT has several advantages over the existing statistical machine translation systems such as the phrase-based system.", "First, NMT uses minimal domain knowledge.", "Second, the NMT system is jointly trained to maximize the translation performance, unlike the existing phrase-based system which consists of many separately trained features.", "Third, the NMT system removes the need to store explicit phrase tables and language models.", "Lastly, the decoder of an NMT system is easy to implement.", "Despite these advantages and promising results, NMT has a limitation in handling a larger target vocabulary, as the complexity of training and decoding increases proportionally to the number of target words.", "In this paper, we propose a new approach to avoid the large target vocabulary problem by preprocessing the target word sequences, encoding them as a longer character sequence drawn from a small character vocabulary.", "The proposed approach removes the need to replace rare words with the unknown word symbol.", "Our approach is simpler than other methods recently proposed to address the same issue (Luong et al., 2015; Jean et al., 2015) .", "Model In this paper, we use our in-house software of NMT that uses an attention mechanism, as recently proposed by Bahdanau et al.", "(2015) .", "The encoder of NMT is a bi-directional recurrent neural network such that h t = [ h t ; h t ] (1) h t = f GRU (W s we x t , h t+1 ) (2) h t = f GRU (W s we x t , h t−1 ) (3) where h t is a 
hidden state of the encoder, x t is a one-hot encoded vector indicating one of the words in the source vocabulary, W s we is a weight matrix for the word embedding of the source language, and f GRU is a gated recurrent unit (GRU) (Cho et al., 2014) .", "At each time, the decoder of NMT computes the context vector c t as a convex combination of the hidden states (h 1 ,.", ".", ".", ",h T ) with the alignment weights α 1 ,.", ".", ".", ",α T : c t = T i=1 α ti h i (4) α ti = exp(e tj ) T j=1 exp(e tj ) (5) e ti = f F F N N (z t−1 , h i , y t−1 ) (6) where f F F N N is a feedforward neural network with a single hidden layer, z t−1 is a previous hidden state of the decoder, and y t−1 is a previous generated target word (one-hot encoded vector).", "A new hidden state z t of the decoder which uses GRU is computed based on z t−1 , y t−1 , and c t : z t = f GRU (y t−1 , z t−1 , c t ) (7) The probability of the next target word y t is then computed by (8) p(y t |y <t , x) = y T t f sof tmax {W z y z t + W zy z t + W cy c t + W yy (W t we y t−1 ) + b y } z t = f ReLU (W zz z t ) (9) where f sof tmax is a softmax function, f ReLU is a rectified linear unit (ReLU), W t we is a weight matrix for the word embedding of the target language, and b y is a target word bias.", "Settings We constructed the source word vocabulary with the most common words in the source language corpora.", "For the target character vocabulary, we used a BI (begin/inside) representation (e.g., 結/B, 果/I), because it gave better accuracy in preliminary experiment.", "The sizes of the source vocabularies for English and Korean were 245K and 60K, respectively, for the En-Ja and Ko-Ja tasks.", "The sizes of the target character vocabularies for Japanese were 6K and 5K, respectively, for the En-Ja and Ko-Ja tasks.", "We chose the dimensionality of the source word embedding and the target character embedding to be 200, and chose the size of the recurrent units to be 1,000.", "Each model was optimized using stochastic gradient descent (SGD).", "We did not use dropout.", "Training was early-stopped to maximize the performance on the development set measured by BLEU.", "Experimental Results All scores of this section are reported in experiments on the official test data; test.txt of the ASPEC-JE corpus.", "Table 1 shows the evaluation results of our En-Ja traditional SMT system.", "The first row in the table indicates the baseline of the tree-to-string systaxbased model.", "The second row shows the system that reflects the tree modification described in section 2.3.", "The augmentation method drastically increased both the number of rules and the BLEU score.", "Our OOV handling methods described in The decoding time of the rule-augmented treeto-string SMT is about 1.3 seconds per a sentence in our 12-core machine.", "Even though it is not a terrible problem, we are required to improve the decoding speed by pruning the rule table or using the incremental decoding method (Huang and Mi, 2010) .", "Table 2 shows the evaluation results of our Ko-Ja traditional SMT system.", "We obtained the best result in the combination of two phrase-based SMT systems.", "Table 3 shows effects of our NMT model.", "\"Human\" indicates the pairwise crowdsourcing evaluation scores provided by WAT 2015 organizers.", "In the table, \"T2S/PBMT only\" is the final T2S/PBMT systems shown in section 5.1 and section 5.2.", "\"NMT only\" is the system using only RNN encoder-decoder without any traditional SMT methods.", "The last row is the combined system that reranks 
T2S/PBMT n-best translations by NMT.", "Our T2S/PBMT system outputs 100,000-best translations in En-Ja and 10,000best translations in Ko-Ja.", "The final output is 1best translation selected by considering only NMT score.", "En-Ja SMT Ko-Ja SMT NMT NMT outperforms the traditional SMT in En-Ja, while it does not in Ko-Ja.", "This result means that NMT produces a strong effect in the language pair with long linguistic distance.", "Moreover, the reranking system achieved a great synergy of T2S/PBMT and NMT in both task, even if \"NMT only\" is not effective in Ko-Ja.", "From the human evaluation, we can be clear that our NMT model produces successful results.", "Conclusion This paper described NAVER machine translation system for En-Ja and Ko-Ja tasks at WAT 2015.", "We developed both the traditional SMT and NMT systems and integrated NMT into the traditional SMT in both tasks by reranking n-best translations of the traditional SMT.", "Our evaluation results showed that a combination of the NMT and traditional SMT systems outperformed two independent systems.", "For the future work, we try to improve the space and time efficiency of both the tree-to-string SMT and the NMT model.", "We also plan to develop and evaluate the NMT system in other language pairs." ] }
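To make the attention computation in equations (4)–(6) above concrete, here is a minimal NumPy sketch of a single decoder step. All dimensions, weight names, and the single-hidden-layer scoring network standing in for f_FFNN are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array of scores.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_context(H, z_prev, y_prev, Wh, Wz, Wy, v):
    """One decoder step of the attention in eqs. (4)-(6).

    H      : (T, d_h)  encoder hidden states h_1..h_T
    z_prev : (d_z,)    previous decoder state z_{t-1}
    y_prev : (d_y,)    embedding/one-hot of the previous target word y_{t-1}
    Wh, Wz, Wy, v      : parameters of the single-hidden-layer scorer
    Returns the context vector c_t and the alignment weights alpha_t.
    """
    # e_{ti} = f_FFNN(z_{t-1}, h_i, y_{t-1}) for every source position i
    scores = np.tanh(H @ Wh.T + z_prev @ Wz.T + y_prev @ Wy.T) @ v   # (T,)
    alpha = softmax(scores)                                          # eq. (5)
    c_t = alpha @ H                                                  # eq. (4): convex combination
    return c_t, alpha

# Toy example with random parameters.
rng = np.random.default_rng(0)
T, d_h, d_z, d_y, d_a = 7, 8, 6, 5, 4
H = rng.normal(size=(T, d_h))
c_t, alpha = attention_context(
    H,
    rng.normal(size=d_z),
    rng.normal(size=d_y),
    rng.normal(size=(d_a, d_h)),
    rng.normal(size=(d_a, d_z)),
    rng.normal(size=(d_a, d_y)),
    rng.normal(size=d_a),
)
assert np.isclose(alpha.sum(), 1.0) and c_t.shape == (d_h,)
```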
{ "paper_header_number": [ "1", "2.1", "2.2", "2.3", "2.4", "2.4.1", "3.1", "3.2", "3.3", "4", "4.1", "4.2", "5", "5.3", "6" ], "paper_header_content": [ "Introduction", "Training data", "Language Analyzer", "Tree-to-string Syntax-based SMT", "Handling Out-of-Vocabulary", "English Spell Correction", "Training data", "Language Analyzer", "Phrase-based SMT", "Neural Machine Translation", "Model", "Settings", "Experimental Results", "NMT", "Conclusion" ] }
GEM-SciDuet-train-85#paper-1220#slide-1
Traditional SMT and Neural MT
Traditional SMT Traditional SMT + Neural Network Neural MT Target Sentence Target Sentence Target Sentence Target Sentence a few year ago recently more recently
Traditional SMT Traditional SMT + Neural Network Neural MT Target Sentence Target Sentence Target Sentence Target Sentence a few year ago recently more recently
[]
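The settings described above encode the Japanese target side as characters tagged with a BI (begin/inside) marker (e.g., 結/B, 果/I). A small sketch of that preprocessing step and its inverse, assuming the target sentence is already word-tokenized; the function names are hypothetical.

```python
from typing import List

def to_bi_characters(words: List[str]) -> List[str]:
    """Convert word tokens into character tokens tagged with B (word-initial)
    or I (word-internal), as in the paper's target-side preprocessing."""
    chars: List[str] = []
    for word in words:
        for i, ch in enumerate(word):
            chars.append(f"{ch}/{'B' if i == 0 else 'I'}")
    return chars

def from_bi_characters(chars: List[str]) -> List[str]:
    """Invert the encoding: start a new word at every /B tag."""
    words: List[str] = []
    for token in chars:
        ch, tag = token.rsplit("/", 1)
        if tag == "B" or not words:
            words.append(ch)
        else:
            words[-1] += ch
    return words

print(to_bi_characters(["結果", "を", "示す"]))
# ['結/B', '果/I', 'を/B', '示/B', 'す/I']
```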
GEM-SciDuet-train-85#paper-1220#slide-2
1220
NAVER Machine Translation System for WAT 2015
GEM-SciDuet-train-85#paper-1220#slide-2
Neural Machine Translation
Proposed by Google and Montreal University in 2014 Input sentence is encoded into fix-length vector, and from the vector translated sentence is produced. Thats all Various extensions is emerged LSTM, GRU, Bidirectional Encoding, Attention Mechanism, RNN using attention mechanism [Bahdanau, 2015] Size of recurrent unit Optimization Stochastic gradient descent(SGD) Time of training 10 days (4 epoch)
Proposed by Google and Montreal University in 2014 Input sentence is encoded into fix-length vector, and from the vector translated sentence is produced. Thats all Various extensions is emerged LSTM, GRU, Bidirectional Encoding, Attention Mechanism, RNN using attention mechanism [Bahdanau, 2015] Size of recurrent unit Optimization Stochastic gradient descent(SGD) Time of training 10 days (4 epoch)
[]
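For the Ko-Ja direction, the SMT description above combines the two phrase-based systems with a simple rule: use the character-level system whenever the input contains an OOV word, otherwise the word-level system. A sketch of that dispatch is given below; the vocabulary set and the two decoder callables are hypothetical stand-ins for the word-level and character-level Moses systems.

```python
from typing import Callable, Iterable, Set

def translate_ko_ja(
    tokens: Iterable[str],
    vocab: Set[str],
    word_pb: Callable[[Iterable[str]], str],
    char_pb: Callable[[Iterable[str]], str],
) -> str:
    """Route a tokenized Korean sentence to one of the two phrase-based systems.

    `vocab` is assumed to be the word-level system's source vocabulary;
    `word_pb` and `char_pb` stand in for the word-level and character-level
    decoders described in the paper.
    """
    tokens = list(tokens)
    has_oov = any(tok not in vocab for tok in tokens)
    return char_pb(tokens) if has_oov else word_pb(tokens)
```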
GEM-SciDuet-train-85#paper-1220#slide-3
1220
NAVER Machine Translation System for WAT 2015
GEM-SciDuet-train-85#paper-1220#slide-3
Pros and Cons of NMT
no need domain knowledge no need to store explicit TM and LM Can jointly train multiple features Can implement decoder easily Is time consuming to train NMT model Is slow in decoding, if target vocab. is large Is weak to OOV problem Is difficult to debug
no need domain knowledge no need to store explicit TM and LM Can jointly train multiple features Can implement decoder easily Is time consuming to train NMT model Is slow in decoding, if target vocab. is large Is weak to OOV problem Is difficult to debug
[]
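The English spell-correction step in section 2.4.1 of the paper content above keeps only close suggestions so that a large gap between the original word and the suggestion does not cause a wrong correction. Below is a sketch of that gap-thresholding rule; the ranked suggestion list is assumed to come from a spell checker such as Aspell, and the top-3 / edit-distance-below-3 thresholds follow the paper, while the function names are illustrative.

```python
from typing import List, Optional

def edit_distance(a: str, b: str) -> int:
    # Standard Levenshtein distance via dynamic programming.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def correct_word(word: str, suggestions: List[str]) -> Optional[str]:
    """Apply the gap-thresholding rule to a possibly misspelled word.

    `suggestions` is assumed to be the spell checker's ranked suggestion list
    (e.g., from an Aspell binding); only the top 3 are considered, and any
    suggestion with an edit distance of 3 or more is ignored.
    """
    # Skip tokens with no lowercase letters (only capitals, digits, symbols),
    # which are likely abbreviations or mathematical expressions.
    if not any(c.islower() for c in word):
        return None
    for cand in suggestions[:3]:
        if edit_distance(word, cand) < 3:
            return cand
    return None  # leave the word unchanged if no close suggestion exists

print(correct_word("experment", ["experiment", "exponent", "excrement"]))
# experiment
```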