Columns: id (string, lengths 7-12), sentence1 (string, lengths 6-1.27k), sentence2 (string, lengths 6-926), label (4 classes)
train_22100
Furthermore, we claim that polysemous words change their meaning in different topic domains; this is reflected in relative shifts of their distributional representations in different topic-based DSMs.
semantic anchors should have consistent semantic relationships regardless of the domain they reside in.
contrasting
train_22101
In particular, given the word pair (w, w′) and their provided contexts (c, c′), we define the following. Following the notation used in Section 3.2, K is the number of topics returned by the trained LDA model, x_j is the word embedding trained on the subcorpus corresponding to the j-th topic after being projected to the unified vector space, p(j|w, c) denotes the posterior probability of topic j returned by LDA given as input the context c of word w, d denotes the cosine similarity between the two input representations, and finally x̂(w) = u_{argmax_{1≤j≤K} p(j|w,c)}(w) is the vector representation of word w that corresponds to the topic with the maximum posterior for c. Intuitively, a higher score in MaxSimC indicates the existence of more robust multi-topic word representations.
AvgSimC provides a topic-based smoothed result across different embeddings.
contrasting
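The AvgSimC/MaxSimC scores described in this record follow the standard multi-prototype formulation; a minimal sketch of how they could be computed, assuming topic-specific embeddings already projected into the unified space and LDA posteriors are available (all array names here are illustrative, not from the paper):

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def avg_sim_c(x_w, x_v, post_w, post_v):
    """Posterior-weighted average similarity over all K^2 topic pairs.

    x_w, x_v: (K, d) arrays of topic-specific embeddings for words w and v.
    post_w, post_v: (K,) arrays holding p(j | word, context).
    """
    K = x_w.shape[0]
    score = 0.0
    for i in range(K):
        for j in range(K):
            score += post_w[i] * post_v[j] * cosine(x_w[i], x_v[j])
    return score

def max_sim_c(x_w, x_v, post_w, post_v):
    # Similarity between the embeddings of the single most probable topic
    # for each word given its context (the x-hat vectors in the record above).
    return cosine(x_w[np.argmax(post_w)], x_v[np.argmax(post_v)])

# Toy usage: K=3 topics, d=5 dimensions.
rng = np.random.default_rng(0)
x_w, x_v = rng.normal(size=(3, 5)), rng.normal(size=(3, 5))
post_w, post_v = np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])
print(avg_sim_c(x_w, x_v, post_w, post_v), max_sim_c(x_w, x_v, post_w, post_v))
```

The contrast in the record falls out of the code: MaxSimC commits to one topic per word, while AvgSimC smooths over every topic pair.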
train_22102
The "pairs in resource" assessment is particularly antithetical to the spirit of RW, which is often employed to assess word vector coverage, and we admit that FN only contains 6.3% of the RW word pairs.
we would argue that there is an important difference between concluding that a semantic resource does not yield gains in retrofitting vs. concluding that the resource improves the quality of the vectors it covers.
contrasting
train_22103
The broad success of CWRs indicates that they encode useful, transferable features of language.
their linguistic knowledge and transferability are not yet well understood.
contrasting
train_22104
Our results confirm that task-trained contextualization is important when the end task requires specific information that may not be captured by the pretraining task (§4).
such end-task-specific contextualization can come from either fine-tuning CWRs or using fixed output features as inputs to a task-trained contextualizer; Peters et al.
contrasting
train_22105
However, the settings that achieve the highest performance for individual target tasks often involve transferring between related tasks (not shown). In the syntactic dependency arc classification (EWT) task, we see the largest gains from pretraining on the task itself, but with a different dataset (PTB).
pretraining on syntactic dependency arc prediction (PTB), CCG supertagging, chunking, the ancestor prediction tasks, and semantic dependency arc classification all give better performance than bidirectional language model pretraining.
contrasting
train_22106
Prior work on combining language modeling and unsupervised tree learning typically embeds soft, tree-like structures as hidden layers of a deep network (Cho et al., 2014; Chung et al., 2017; Shen et al., 2018, 2019).
Buys and Blunsom (2018) make Markov assumptions and perform exact marginalization over latent dependency structures. [Footnote 22: Many prior works that induce trees directly from words often employ additional heuristics based on punctuation (Seginer, 2007; Ponvert et al., 2011; Spitkovsky et al., 2013; Parikh et al., 2014), as punctuation (e.g. …)]
contrasting
train_22107
Standard linguistic theories propose that natural language is structured as nested constituents organised in the form of a tree (Partee et al., 1990).
most popular models, such as the Long Short-Term Memory network (LSTM) (Hochreiter and Schmidhuber, 1997), process text without imposing a grammatical structure.
contrasting
train_22108
As shown in Fig. 1, our model suffers only a small loss in accuracy as the length increases to ten times the maximum length seen during training.
we notice that final representations produced by the parser are very similar to each other.
contrasting
train_22109
Traditionally, supervised parsers trained on datasets such as the Penn Treebank (Marcus et al., 1993) are used to obtain syntactic trees.
the treebanks used to train these supervised parsers are typically small and restricted to the newswire domain.
contrasting
train_22110
It contains Wikipedia articles from a wide range of topics.
the CoNLL 2003 corpus consists of news articles.
contrasting
train_22111
Given an initial value c0 ≜ c(0) ≥ 0 and a slope parameter r, we define c_linear(t) = min(1, r·t + c0). In this case, new training examples are constantly being introduced during the training process, at a constant rate r (as a proportion of the total number of available training examples).
note that we can also define r = (1 − c0)/T, where T denotes the time after which the learner is fully competent, which results in c_linear(t) = min(1, t·(1 − c0)/T + c0). In the case of the linear form, the same number of new, and more difficult, examples is added to the training set at all times t; as the training data grows in size, it gets less likely that any single data example will be sampled in a training batch.
contrasting
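The two competence schedules this record contrasts are easy to compute directly. A small sketch, assuming the usual square-root variant from competence-based curriculum learning (the root form here is my reconstruction, not quoted from the record):

```python
import math

def c_linear(t, c0, T):
    # Linear competence: starts at c0 and reaches full competence (1.0) at T.
    r = (1.0 - c0) / T
    return min(1.0, r * t + c0)

def c_root(t, c0, T):
    # Root competence: admits many examples early, then slows down, so each
    # example's chance of being sampled stays more nearly constant over time.
    return min(1.0, math.sqrt(t * (1.0 - c0 ** 2) / T + c0 ** 2))

# At each step t, the learner trains only on the easiest c(t) fraction of data.
for t in (0, 250, 500, 1000):
    print(t, round(c_linear(t, c0=0.1, T=1000), 3), round(c_root(t, c0=0.1, T=1000), 3))
```

Printing the two columns makes the record's contrast visible: the linear schedule grows the pool at a fixed rate, so later examples are increasingly under-sampled, which is exactly what the root schedule compensates for.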
train_22112
This is contrary to other results in the machine translation community (e.g., Vaswani et al., 2017), but could be explained by the fact that we are not using any learning rate schedule for training Transformers.
they never manage to outperform Transformers in terms of test BLEU score of the final model.
contrasting
train_22113
Back-translation has been dominantly used in these approaches, where pseudo sentence pairs are generated to train the translation systems with a reconstruction loss.
it is inefficient because the generated pseudo sentence pairs are usually of low quality.
contrasting
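The back-translation procedure this record criticizes can be summarized in a few lines: a target-to-source model produces pseudo sources, which are paired with real targets for reconstruction training. A self-contained sketch with a trivial stand-in "model" (the callable and names are illustrative only):

```python
def back_translation_pairs(translate_tgt_to_src, mono_target):
    """Generate pseudo parallel pairs from target-side monolingual text.

    translate_tgt_to_src: any callable mapping a target sentence to a
    (possibly noisy) source sentence.
    """
    return [(translate_tgt_to_src(y), y) for y in mono_target]

# Trivial stand-in model: reverses word order, just to make this runnable.
fake_ts_model = lambda y: " ".join(reversed(y.split()))
pairs = back_translation_pairs(fake_ts_model, ["das ist ein test", "guten morgen"])
# Each (pseudo_source, real_target) pair then trains the source->target model
# with a reconstruction loss on the real target; the record's complaint is
# that low-quality pseudo sources make this loop inefficient.
print(pairs)
```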
train_22114
Recently, neural-based methods (Chu et al., 2016;Grover and Mitra, 2017;Grégoire and Langlais, 2018) aim to select potential parallel sentences from monolingual corpora in the same domain.
these neural models need to be trained on a large parallel dataset first, which is not applicable to language pairs with limited supervision.
contrasting
train_22115
Most existing methods in comparable corpora mining introduce two encoders to represent sentences of two languages separately, and then use another network to measure the similarity (Chu et al., 2016;Grover and Mitra, 2017;Grégoire and Langlais, 2018).
owing to the shared encoders and decoders in language modeling, the semantic spaces of two languages are already strongly connected in our scenario.
contrasting
train_22116
These results validate the effectiveness of our approach and indicate that the proposed extract-edit learning framework can learn a better mapping and alignment between language spaces than back-translation.
if extracting only the top-1 target sentence in our approach, the performance is not always improved (e.g., en → de, de → en, and en → ro).
contrasting
train_22117
(2018b) make encouraging progress on unsupervised NMT, mainly based on initialization, denoising language modeling, and back-translation.
all these unsupervised models are based on the back-translation learning framework to generate pseudo language pairs for training.
contrasting
train_22118
Solving the problem could significantly improve data efficiency: a single multilingual model would be able to generalize and translate between any of the O(k²) language pairs after being trained only on O(k) parallel corpora.
performance on zero-shot tasks is often unstable and significantly lags behind the supervised directions.
contrasting
train_22119
(2016) also show that resulting models seem to exhibit some degree of zero-shot generalization enabled by parameter sharing.
since we lack data for zero-shot directions, composite likelihood (3) misses the terms that correspond to the zero-shot models, and hence has no statistical guarantees for performance on zero-shot tasks.
contrasting
train_22120
(2017) would require training O(k²) models to encompass all the pairs.
we use a single multilingual architecture which has more limited model capacity (although in theory, our approach is also compatible with using separate models for each direction).
contrasting
train_22121
While our approach gives small gains over these baselines, we believe the dataset's peculiarities make it not reliable for evaluating zero-shot generalization.
on our proposed preprocessed IWSLT17 that eliminates the overlap and reduces the number of supervised directions (8), there is a considerable gap between the supervised and zero-shot performance of Basic.
contrasting
train_22122
Note that the number of recurrence steps T is allowed to be unequal to the length of the input sequence J.
to RNN which recurs over the individual symbols of the input sequences, ARN recurrently revises its representations of all symbols in the sequence with an attention model.
contrasting
train_22123
A classic solution employs reinforcement learning (RL) to learn a dialog policy that models the optimal action distribution conditioned on the dialog state (Williams and .
since there are infinite human language possibilities, an enduring challenge has been to define what the action space is.
contrasting
train_22124
From Table 2, it appears that the word-level RL baseline performs better than Lite-Cat in terms of rewards.
Figure 2 shows that the two LaRL models achieve strong task rewards with a much smaller performance drop in language quality (PPL), whereas the word-level model can only increase its task rewards by deviating significantly from natural language.
contrasting
train_22125
(2016a) encourage diversity by re-ranking the beam search results according to their mutual information with the conversation context.
as beam search itself often produces lists of nearly identical sequences, this method can require a large beam width (e.g.
contrasting
train_22126
Gaussian) z instead of a point encoding.
it suffers from the vanishing latent variable problem (Bowman et al., 2016;Zhao et al., 2017) when applied to text generation tasks.
contrasting
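The vanishing-latent-variable problem this record mentions is commonly mitigated by annealing the weight on the KL term (one of the remedies discussed by Bowman et al., 2016). A toy sketch of an annealed VAE objective, with the loss terms as scalar placeholders computed elsewhere by the model:

```python
import math

def kl_weight(step, total_anneal_steps=10000):
    # Sigmoid annealing: the KL penalty is switched on gradually, so the
    # decoder cannot trivially ignore the latent z early in training.
    return 1.0 / (1.0 + math.exp(-10.0 * (step / total_anneal_steps - 0.5)))

def vae_loss(reconstruction_nll, kl_divergence, step):
    # Negative ELBO with an annealed KL weight.
    return reconstruction_nll + kl_weight(step) * kl_divergence

for step in (0, 2500, 5000, 7500, 10000):
    print(step, round(kl_weight(step), 3))
```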
train_22127
Accordingly, we refer to this approach as SPACEFUSION with path regularization.
to previous multi-task conversation model (Luan et al., 2017), where S2S and AE are trained alternately, our approach trains S2S and AE at the same time by minimizing the loss function of Equation 3.
contrasting
train_22128
End-to-end neural networks trained for these task-oriented dialogs are expected to be immune to any changes in the KB.
existing approaches break down when asked to handle such changes.
contrasting
train_22129
(2018) showed how the reported high performance of multi-prototype techniques in this dataset was not due to an accurate sense representation, but rather to a subsampling effect, which had not been controlled for in similarity datasets.
a context-insensitive word embedding model would perform no better than a random baseline on our dataset.
contrasting
train_22130
"Isidora, therefore, is the city of his dreams…" (Figure 1). Calvino labels the thematically similar cities in the top row as cities & the dead.
although the bottom two cities share a theme of desire, he assigns them to different groups.
contrasting
train_22131
The two cities appear largely dissimilar: Isaura is a city with a thousand wells dug by its inhabitants, while Armilla is an "unfinished" city without walls, ceilings, or floors.
both cities' descriptions mention supernatural beings living underground.
contrasting
train_22132
All these prior approaches were evaluated on existing datasets.
we perform studies on PAWS, a new dataset that emphasizes the importance of capturing structural information in representation learning.
contrasting
train_22133
Our preliminary results showed that the distribution of paraphrase to nonparaphrases from this method is highly imbalanced (about 1:4 ratio).
we seek to create a balanced dataset, so we use an additional strategy based on back translation, which has the opposite label distribution and also produces greater diversity of paraphrases while still maintaining a high BOW overlap.
contrasting
train_22134
GEC models have been previously evaluated based on a single commonly applied corpus: the CoNLL-2014 benchmark.
the evaluation remains incomplete because the task difficulty varies depending on the test corpus and conditions such as the proficiency levels of the writers and essay topics.
contrasting
train_22135
This field previously focused on improving parsing accuracy on Penn Treebank (Marcus et al., 1993).
robustness was largely improved by evaluation using multiple corpora including Ontonotes (Hovy et al., 2006) and Google Web Treebank (Petrov and McDonald, 2012).
contrasting
train_22136
Transformer performs best on CoNLL-2014.
it exhibits third-best performance among FCE, KJ, and ICNALE; LSTM outperforms the other models by a large margin of up to 5.3 F0.5 points.
contrasting
train_22137
Similar to Joty and Hoque (2016), we could use h_i to classify sentence x_i into one of the speech act types using a Softmax output layer.
in that case, we would disregard the discourse-level dependencies between sentences in a conversation.
contrasting
train_22138
In this case, since the out-of-domain labeled dataset (MRDA) is much larger, it overwhelms the model, inducing features that are not relevant for the task in the target domain.
when we provide the models with some labeled in-domain examples in the semi-supervised (50%) setting, we observe about 11% absolute gains in QC3 and BC3 over the corresponding Merge baselines, and 7-8% gains over the corresponding Fine-tune baselines.
contrasting
train_22139
continuing the classification with the positively categorized instances of the previous step.
the independent classification already shows that the amount of data is insufficient.
contrasting
train_22140
The SemEval task was to estimate the intensity of a given tweet and its corresponding emotion.
in this study, we utilize the labeled dataset only to classify the tweets into four emotion categories and use the training, development and test sets provided in this dataset in our experiments.
contrasting
train_22141
In this approach, adversarial examples are constructed through an optimization process that uses gradient descent to search for input examples that maximally change the predictions of a model.
developing attacks with only black-box access to a model (no access to gradients) is still under-explored in NLP.
contrasting
train_22142
Our model identifies semantically-relevant parts of documents and locally integrates their representations through clustering and autoencoding.
to averaging, our model prevents larger semantically relevant parts of inputs from dominating final representations.
contrasting
train_22143
As offensive content has become pervasive in social media, there has been much research in identifying potentially offensive messages.
previous work on this topic did not consider the problem as a whole, but rather focused on detecting very specific types of offensive content, e.g., hate speech, cyberbullying, or cyber-aggression.
contrasting
train_22144
For instance, in the following sentence "... , which binds to the enhancer A located in the promoter of the mouse MHC class I gene H-2Kb, ...", when determining the trigger type of binds, we need to carefully select its contextual words, such as H-2Kb, which indicates the object of binds.
binds and H-2Kb are separated… [Figure: The framework of the KB-driven Tree-LSTM model (legend: [trigger], [argument]).]
contrasting
train_22145
We can see that, without using KB information, the Tree-LSTM mistakenly predicts the argument role of E1 as None.
by incorporating KB concept embeddings, especially the information from the function description positive regulation of transcription, DNA-templated for Tax, our approach successfully promotes the probability of E1 being predicted as the Theme of E2.
contrasting
train_22146
Moreover, the additional data should be similar enough to existing training data to be helpful.
crafting new features requires creativity and collaboration with subject matter experts, and the implementation can be time consuming.
contrasting
train_22147
Generative Adversarial Networks: The idea of aligning representations by making them indistinguishable is inspired by GAN (Goodfellow et al., 2014), where a generator produces fake images (or other data) that are as similar to real data as possible.
our model does not have a generator component as GANs do.
contrasting
train_22148
The first two rows present the average lengths of each utterance of counselors and clients in terms of words and characters, showing there is a small difference between the counselor utterance and the client utterance.
the average number of utterances in a single counseling session differs; on average, clients write more utterances than counselors.
contrasting
train_22149
Modern neural architectures for NLP are highly effective when provided a large amount of labelled training data (Zhang et al., 2015;Conneau et al., 2017;Bowman et al., 2015).
a large labelled data set is not always readily accessible due to the high cost of expertise needed for labelling or even due to legal barriers.
contrasting
train_22150
This produces dramatic improvements over a range of NLP tasks where appropriate unlabelled data is available (Peters et al., 2017, 2018; Akbik et al., 2018; Devlin et al., 2019).
there is still a lack of systematic study on how to select appropriate data to pretrain word vectors or LMs.
contrasting
train_22151
The question in domain adaptation is usually framed as 'Given a source and a target, how to transfer?'.
the question we address is 'Given a specific target, which source to choose from?'.
contrasting
train_22152
On one hand, it is expected that PubMed is similar to CRAFT and JNLPBA, since they are all sampled journal articles about biology and health, thus being similar in terms of both field and tenor.
although ScienceIE does not have the same field as PubMed (computer science, material and physics versus biology and health), they are similar because they share a similar tenor (scholarly publications).
contrasting
train_22153
The field of CADEC is therefore more similar to PubMed, which includes journal articles in the health discipline, and MIMIC, which contains clinical notes.
CADEC is written by patients, and can be considered as 'drug reviews'.
contrasting
train_22154
A practical approach for collecting a suf-ficiently large corpus would be to use crowdsourcing platforms like Amazon Mechanical Turk (MTurk).
crowd workers in general are likely to provide noisy annotations (Abad and Moschitti, 2016;Plank et al., 2014;Alonso et al., 2015), an issue exacerbated by the technical nature of specialized content.
contrasting
train_22155
The model with best precision is different for Patient, Intervention and Outcome labels.
re-weighting by difficulty does consistently yield the best recall for all three extraction types, with the most notable improvement for I and O, where recall improved by 10 percentage points.
contrasting
train_22156
Obtaining expert annotations for the one thousand most difficult instances greatly improved the model performance.
the choice of how many difficult instances to annotate was uninformed.
contrasting
train_22157
Neural approaches seem to be stronger for languages with complex case systems and agglutinative morphology.
edit-tree approaches excel on more synthetic languages (e.g.
contrasting
train_22158
This is most likely due to the fact that, from an edit-tree approach perspective, a large number of trees creates a large number of classes, which leads to higher class imbalance and more sparsity.
edit-tree based approaches do outperform representation learning methods for languages with lower number of trees, which leads to the intuition that the edit-tree formalism does provide a useful inductive bias to the task of lemmatization and it should not be discarded in future work.
contrasting
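The class-explosion effect these lemmatization records describe can be illustrated with a much simpler stand-in for edit trees: plain suffix-replacement rules extracted from (form, lemma) pairs, where each distinct rule becomes one output class. This is a deliberate simplification for illustration, not the papers' actual edit-tree construction:

```python
from collections import Counter
from os.path import commonprefix

def suffix_rule(form, lemma):
    # Simplified stand-in for an edit tree: strip the longest common prefix
    # and record the suffix transformation needed to map form -> lemma.
    p = len(commonprefix([form, lemma]))
    return (form[p:], lemma[p:])

pairs = [("walking", "walk"), ("talked", "talk"), ("ran", "run"),
         ("mice", "mouse"), ("geese", "goose")]
classes = Counter(suffix_rule(f, l) for f, l in pairs)
# Each distinct rule is one class; irregular forms each add a new, rarely
# seen class, which is the imbalance and sparsity problem described above.
print(classes)
```

On morphologically rich languages the number of such rules grows quickly, which is the intuition behind the record's observation that many trees mean many sparse classes.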
train_22159
Comparing encoder representations to decoder representations, it is interesting to see that in several cases the decoder side representations performed better than the encoder side ones, even though the former were trained using a uni-directional LSTM.
since there is no difference in the general trends between the encoder-and the decoder-side representations, below we focus on the encoder-side only.
contrasting
train_22160
For English, the subword (BPE and Morfessor) and the character representations yielded comparable results.
for German, BPE performed better.
contrasting
train_22161
We can see that combinations involving characters (B+C, W+C in the table) yield larger improvements compared to combining word- and BPE-based representations (W+B).
combining all three performed best for all languages and for all tasks.
contrasting
train_22162
We can see that in most cases, the subword-based systems perform better than the word-based and the character-based ones.
this is not true in the case of using their representations as features for a core NLP task as in our experiments above.
contrasting
train_22163
Lemmatization has previously been shown to improve recall for information retrieval (Kanis and Skorkovská, 2010;Monz and De Rijke, 2001), to aid machine translation (Fraser et al., 2012;Chahuneau et al., 2013) and is a core part of modern parsing systems (Björkelund et al., 2010;Zeman et al., 2018).
the task is quite nuanced as the proper choice of the lemma is context dependent.
contrasting
train_22164
A key feature of our model is its simplicity: Our contribution is to show how to stitch existing models together into a joint model, explaining how to train and decode the model.
despite the model's simplicity, it still achieves a significant improvement over the state of the art on our target task: lemmatization.
contrasting
train_22165
Principles are universally true for all languages.
languages are also governed by parameters.
contrasting
train_22166
The likelihood of this model is now convex in the parameter embeddings.
to the full matrix factorisation setting, here, all language-specific knowledge must come from an external source, namely, the unlabelled text.
contrasting
train_22167
Brown clustering is a predictable, bottom-up, agglomerative, hard clustering algorithm that for the same hyper-parameter k, generates the same clusters and therefore only one data sample.
the Exchange algorithm is an iterative clustering algorithm that has a complete and valid cluster partitioning at the end of each iteration.
contrasting
train_22168
show as a verb vs show as a noun.
Czech is highly inflected, accounting for gender, case, number and person.
contrasting
train_22169
Note that the choice of word types w T and w S to calculate CVL is arbitrary.
CVL is only meaningful when the two word types are semantically related, such as word translations, because those word pairs are where the knowledge transfer takes place.
contrasting
train_22170
This term usually cannot be estimated in practice since the labels for target documents are unavailable.
we can still calculate this term for the purpose of analysis.
contrasting
train_22171
In the middle panel of Figure 2, CVL over all word pairs from topic words is decreasing as sampling proceeds and becomes stable by the end of sampling.
the correlations between CNPMI and CVL are constantly decreasing.
contrasting
train_22172
The larger the crosslingual entropy, the harder it is to get a low CVL because it needs larger monolingual entropy to decrease the bound, as shown in Section 2.4.
the inner product of word pairs shows an opposite pattern of CVL, indicating a negative correlation (Lemma 1).
contrasting
train_22173
In Figure 2 we see the correlation between CNPMI and CVL is around −0.4 at the end of sampling, so there are fewer clear patterns for CNPMI in Figure 4.
we also notice that the word pairs with higher CNPMI scores often appear at the bottom where crosslingual entropy is low while the monolingual entropy is high.
contrasting
train_22174
McDonald and Nivre (2007) and McDonald and Nivre (2011) have shown that history-based features enhance transition-based parsers as long as they do not suffer from error propagation.
Nivre (2006) has also shown that lookahead features are absolutely crucial given the greedy left-to-right parsing strategy.
contrasting
train_22175
We hypothesise that arc depth might impact the usefulness of composition, since more depth means more matrix multiplications with the composition function.
we find no correlation between average arc depth of the treebanks and usefulness of composition.
contrasting
train_22176
Early work focused on supervised methods that maximize the similarity of the embeddings of words that exist in a manually-created dictionary, according to some similarity metric (Mikolov et al., 2013; Faruqui and Dyer, 2014; Jawanpuria et al., 2018).
recently proposed unsupervised methods frame this problem as minimization of some form of distance between the whole set of discrete word vectors in the chosen vocabulary, e.g.
contrasting
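A common closed-form solver for the supervised, dictionary-based objective this record describes is orthogonal Procrustes; the sketch below is the generic method rather than any single cited paper's variant, and assumes X and Y hold row-aligned embeddings of seed-dictionary pairs:

```python
import numpy as np

def procrustes(X, Y):
    """Orthogonal map W minimizing ||X W - Y||_F subject to W^T W = I.

    X, Y: (n, d) arrays whose i-th rows are embeddings of a dictionary pair.
    The solution is W = U V^T, where U S V^T is the SVD of X^T Y.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Sanity check on synthetic data: recover a known orthogonal mapping.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
W_true, _ = np.linalg.qr(rng.normal(size=(50, 50)))
Y = X @ W_true
W = procrustes(X, Y)
print(np.allclose(X @ W, Y))  # True on noiseless data
```

The unsupervised methods the record contrasts with replace the seed dictionary by a distance between whole embedding distributions, but many of them still apply a Procrustes refinement step once an initial dictionary is induced.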
train_22177
Also, DeMa-BME demonstrates notably better performance on distant language pairs (en-ru, en-ja and en-zh) over other unsupervised methods, which often achieve good performance on etymologically close languages but fail to converge on the distant language pairs.
when the dictionary is initialized with identical strings for SL-unsup, we obtain decent results on these languages.
contrasting
train_22178
Most previous work in this field focuses on extracting phrases from target posts or selecting candidates from a pre-defined list Zhang et al., 2017).
hashtags usually appear in neither the target posts nor the given candidate list.
contrasting
train_22179
Topic models are also widely applied to induce topic words as hashtags (Krestel et al., 2009;Ding et al., 2012;Godin et al., 2013;Gong et al., 2015;.
these models are usually unable to produce phrase-level hashtags, which can be achieved by ours via generating hashtag word sequences with a decoder.
contrasting
train_22180
We then explore three shielding methods-visual character embeddings, adversarial training, and rule-based recovery-which substantially improve the robustness of the models.
the shielding methods still fall behind performances achieved in non-attack scenarios, which demonstrates the difficulty of dealing with visual attacks.
contrasting
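To make the attack being shielded against concrete: VIPER replaces each character, with probability p, by a visually similar neighbor. The sketch below mimics that with a tiny hand-picked homoglyph table; VIPER itself selects neighbors from visual character embedding spaces, so the table here is purely illustrative:

```python
import random

# Tiny hand-picked homoglyph table (Cyrillic/accented look-alikes).
# VIPER derives neighbors from character embeddings; this map only
# stands in for that selection.
HOMOGLYPHS = {"a": "аáâ", "e": "еéê", "o": "оóõ", "i": "íïı", "s": "ѕš"}

def visually_perturb(text, p=0.1, seed=0):
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch in HOMOGLYPHS and rng.random() < p:
            out.append(rng.choice(HOMOGLYPHS[ch]))  # swap for a look-alike
        else:
            out.append(ch)
    return "".join(out)

print(visually_perturb("this sentence looks almost unchanged", p=0.3))
```

The output remains readable to humans even at high p, which is exactly why the records below report that humans recover most characters while standard models, which have no built-in notion of visual similarity, degrade sharply.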
train_22181
Recently, some NLP systems have exploited visual features to capture visual relationships among characters in compositional writing systems such as Chinese or Korean (Liu et al., 2017).
in more general cases, current neural NLP systems have no built-in notion of visual character similarity.
contrasting
train_22182
Numbers, especially, are more difficult to recover.
even at 80% disturbance level, humans can, on average, correctly recover at least 93% of all characters in the input text in all conditions.
contrasting
train_22183
VIPER with DCES and p = 0.1 achieves a success rate of 24.1%, i.e., roughly one fourth of the toxic comments receive a lower TL.
the he is alᶊo a fagᶢoƭ .
contrasting
train_22184
To prevent introducing bias in this manual filtering step, we define each relation descriptor directly as its five nearest neighboring words in the input vocabulary.
since the full vocabulary would contain many uncommon words which could hinder the interpretation of relation descriptors, we limit both models to choose descriptor words from the most frequently occurring 500 words in their own processed input vocabulary (i.e., verb predicates only for LARN and all words for RMN).
contrasting
train_22185
LEAD-1 achieves ROUGE (1/2/L) scores of 27.5/9.6/23.7 respectively.
our selector achieves scores of 30.2/12.2/26.45 which presents an improvement of over 10% in each category.
contrasting
train_22186
Generating text is a core part of many NLP tasks such as image captioning (Lin et al., 2014), open-domain dialogue, story generation (Roemmele, 2016), and summarization (Nallapati et al., 2016).
proper evaluation of natural language generation has proven difficult (Liu et al., 2016;Novikova et al., 2017;Chaganty et al., 2018).
contrasting
train_22187
First, we show that human evaluation alone is insufficient to discriminate model generations from the references, leading to inflated estimates of model performance.
HUSE is able to reveal deficiencies of current models.
contrasting
train_22188
In the spirit of the Turing Test, we could consider using the error rate of a human discriminator f_hum instead, often considered the gold standard for evaluation.
while humans might have knowledge of p_ref, they do not have full knowledge of p_model and thus would have difficulties determining which sentences a model cannot generate.
contrasting
train_22189
This is analogous to learning the discriminator in a Generative Adversarial Network (GAN) (Goodfellow et al., 2014) or learning an evaluation metric from human judgments (Lowe et al., 2017).
as (x, y) are high-dimensional objects, training a good classifier is extremely difficult (and perhaps not significantly easier than solving the original generation problem).
contrasting
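The difficulty noted in this record is what motivates HUSE's design: instead of classifying high-dimensional (x, y) pairs, the discriminator operates on two scalar features per example, roughly the model's log-probability and a human quality judgment, evaluated with a leave-one-out nearest-neighbor classifier. A rough sketch under those assumptions (feature construction and k are illustrative):

```python
import numpy as np

def loo_knn_error(feats, labels, k=3):
    """Leave-one-out k-NN classification error.

    feats: (n, 2) array, e.g. (model log-probability, human judgment).
    labels: (n,) array, 1 for reference text, 0 for model samples.
    """
    n = len(feats)
    errors = 0
    for i in range(n):
        d = np.linalg.norm(feats - feats[i], axis=1)
        d[i] = np.inf                      # hold out the point itself
        votes = labels[np.argsort(d)[:k]]
        pred = int(votes.sum() * 2 > k)    # majority vote
        errors += int(pred != labels[i])
    return errors / n

# Toy demo: overlapping feature clouds give a high (near-0.5) error,
# i.e. references and samples are hard to tell apart on these features.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(0.5, 1, (50, 2))])
labels = np.array([1] * 50 + [0] * 50)
print(loo_knn_error(feats, labels))
```

HUSE-style scores are derived from this discriminator error; an error near 0.5 (HUSE near 1) means the model is indistinguishable from the references on these two features.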
train_22190
Table 1 shows that single-sentence language models are nearly indistinguishable, with HUSE = 0.86 and implied discriminator error of 43%.
both summarization and dialogue are highly distinguishable (HUSE ≈ 0.5) with relatively low quality when sampled from t = 1.0.
contrasting
train_22191
Examining the samples in this top-right region reveals that these are news stories with short headlines such as "Nadal pulls out of Sydney International" which can be reliably generated even at t = 1.0.
the model frequently generates low quality samples that can easily be distinguished such as "two new vaccines in the poor countries were effective against go-it-alone study says" (Table 2).
contrasting
train_22192
This results in two incomparable evaluation metrics, which prevent us from reasoning about tradeoffs between diversity and quality.
HUSE allows us to make precise statements about the tradeoffs between model quality and diversity because it is a single metric which decomposes into diversity and quality terms.
contrasting
train_22193
Task success combined with dialogue cost can be linked to human judgments like user satisfaction via the PARADISE framework (Walker et al., 1997).
in chitchat tasks, which we study in this work, automatic metrics and their relation to human ratings are less well-understood.
contrasting
train_22194
Our most engaging model, which controls both repetition and question-asking, marked 'Question (CT)' in Figure 3 (left), matches the engagingness of the winning entry in the ConvAI2 competition, as both models achieve a raw score of 3.1 (Dinan et al., 2019).
the ConvAI2 winner, Lost in Conversation, was trained on approximately 12× as much data as our model.
contrasting
train_22195
For full details of these scores, see Appendix F and H. For time and budget reasons, we did not evaluate any models controlling both question-asking and specificity.
we expect it is possible to obtain further improvements by doing so.
contrasting
train_22196
Globally normalized neural sequence models are considered superior to their locally normalized equivalents because they may ameliorate the effects of label bias.
when considering high-capacity neural parametrizations that condition on the whole input sequence, both model classes are theoretically equivalent in terms of the distributions they are capable of representing.
contrasting
train_22197
When conditioned on the full input sequence and the entire prediction history, both locally normalized and globally normalized conditional models should have the same expressive power under a high-capacity neural parametrization in theory, as they can both model the same set of distributions over all finite-length output sequences (Smith and Johnson, 2007).
locally normalized models are constrained in how they respond to search errors during training since the scores at each decoding step must sum to one.
contrasting
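The theoretical-equivalence claim in this record can be checked on a toy example: with unconstrained per-step scores, the product of per-step softmaxes equals the globally normalized sequence probability, because the global partition function factorizes. A small self-contained sketch with toy scores over a binary vocabulary:

```python
import itertools
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Per-step scores for a toy vocabulary of 2 symbols over 3 steps: shape (T, V).
scores = np.array([[2.0, 0.5], [0.1, 1.5], [1.0, 1.0]])

def local_prob(y):
    # Locally normalized: a softmax at every step, probabilities multiply,
    # so the scores at each decoding step must sum to one.
    return float(np.prod([softmax(scores[t])[s] for t, s in enumerate(y)]))

def global_prob(y):
    # Globally normalized: exponentiate the summed score and normalize once
    # over every candidate sequence (the partition function Z).
    seq_score = lambda s: sum(scores[t][c] for t, c in enumerate(s))
    Z = sum(np.exp(seq_score(s)) for s in itertools.product(range(2), repeat=3))
    return float(np.exp(seq_score(y)) / Z)

# With unconstrained per-step scores the two coincide, mirroring the
# equivalence claim above; label bias only emerges once the local scores
# are constrained (e.g. to partial inputs) and search is approximate.
y = (0, 1, 0)
print(local_prob(y), global_prob(y))  # identical values
```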
train_22198
(2016) prove that this class of locally normalized models that relies on the structural assumption of access to only left-to-right partial input at each step, is strictly less expressive than its globally normalized counterpart.
the standard sequence-to-sequence models used most often in practice and presented in this paper actually condition the decoder on a summary representation of the entire input sequence, x, computed by a neural encoder.
contrasting
train_22199
This illustrates a mechanism by which search-aware training of globally normalized models in large search spaces might be more effective. [Figure panels: Soft Beam Computation; Soft-k-argmax.]
as discussed earlier, if we can perform exact search then this label bias ceases to exist, because both models have the same expressive power with a search-agnostic optimization scheme.
contrasting