id         string (7-12 chars)
sentence1  string (6-1.27k chars)
sentence2  string (6-926 chars)
label      string (4 classes)
train_4100
SL and RL correspond to different methods of updating weights, so both can be applied to the same network.
there is no guarantee that the optimal RL policy will agree with the SL training set; therefore, after each RL gradient step, we check whether the updated policy reconstructs the training set.
contrasting
train_4101
This construction intersects the languages of G and F(r_s), but because F(r_s) accepts all strings over the alphabet, the languages of G and G′ will be the same (namely, all REs for o_u).
the weights in G′ are inherited from F(r_s); thus the weight of each RE in L(G′) is the edit cost from r_s.
contrasting
train_4102
That is to say, the error signal back propagated from the word sequence would not affect the structural label RNN, and vice versa.
in the Hierarchical RNN encoder, the error signal back propagated from the word sequence has a direct impact on the structural label annotation vectors, and thus on the structural label embeddings.
contrasting
train_4103
Our approach is similar to theirs in using a source-side phrase parse tree.
our Mixed RNN system, for example, incorporates syntax information by learning annotation vectors of syntactic labels and words stitchingly, but is still a sequence-to-sequence model, with no extra parameters and with less increase in training time.
contrasting
train_4104
(2016) design a few experiments to investigate if the NMT system without external linguistic input is capable of learning syntactic information on the source-side as a by-product of training.
their work does not focus on improving NMT with linguistic input.
contrasting
train_4105
"NMT" refers to the translation result from a conventional NMT model, which fails to capture the long distance word relation denoted by the dashed arrow.
it is not trivial to build and leverage syntactic structures on the target side in the current NMT framework.
contrasting
train_4106
EmoTweet-28 is a useful resource.
the agreement between annotators is not high, and the set of assigned labels does not adhere to a specific theory of emotion.
contrasting
train_4107
This study analyzed user-level political ideology through Twitter posts.
to previous work, we made use of a novel data set where fine-grained user political ideology labels are obtained through surveys as opposed to binary self-reports.
contrasting
train_4108
This knowledge is then used as input to PSL Model 1 via the rule: UNIGRAM_F(T, U) → FRAME(T, F) (shown in line 1 of Table 3).
not every tweet will have a unigram that matches those in this list.
contrasting
train_4109
In preliminary experiments for the hybrid models, we found that selecting the same vocabulary of 30K words for the source and target based on combined frequency was better (.003 in F0.5) and use that method for vocabulary selection instead.
there was no gain observed by using such a vocabulary selection method in the baseline.
contrasting
train_4110
We conjecture that terms in scientific literature are often overloaded (e.g., a tree could be a parse tree or a decision tree), necessitating some abstraction when representing ideas.
news stories are more self-contained and seek to employ consistent usage.
contrasting
train_4111
The procedure generally builds on steps designed to assemble the SICK corpus, which contains pairs of English sentences annotated for semantic relatedness and entailment, because we aim at building a comparable dataset.
the implementation of particular building steps significantly differs from the original SICK design assumptions, which is caused both by the lack of necessary extraneous resources for an investigated language and by the need for language-specific transformation rules.
contrasting
train_4112
The standard measure of inter-annotator agreement in various natural language labelling tasks is Cohen's kappa (Cohen, 1960).
this coefficient is designed to measure agreement between two annotators only.
contrasting
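A short worked formula may clarify why the coefficient in the record above is limited to two annotators: Cohen's kappa compares the observed agreement p_o of one annotator pair against their chance agreement p_e (standard definition, stated here for reference):

```latex
\kappa = \frac{p_o - p_e}{1 - p_e}
```

Both p_o and p_e are computed from a single two-way confusion matrix, which is only defined for exactly two annotators.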
train_4113
As we aim at building an evaluation dataset which is comparable to the SICK corpus, the general assumptions of our procedure correspond to the design principles of the SICK corpus.
the procedure of building the SICK corpus cannot be adapted without modifications.
contrasting
train_4114
The presented procedure of building a dataset was tested on Polish.
it is very likely that the annotation framework will work for other Slavic languages (e.g.
contrasting
train_4115
The question is answerable simply by noticing one sentence, without needing to fully understand the content of the text.
consider the second example from MCTest (Richardson et al., 2013), which was written for children and is easy to read.
contrasting
train_4116
We applied these metrics only to sentences that needed to be read in answering questions.
because these metrics were proposed for human readability, they do not necessarily correlate with those used in RC systems.
contrasting
train_4117
This appears to have been caused by the automated sourcing methods, which may generate a separation between the contents of the context and question (i.e., web segments and a search query in MS MARCO, and a context article and question article in Who-did-What).
NewsQA had no nonsense questions.
contrasting
train_4118
Although its questions are created automatically, they are sophisticated in terms of knowledge reasoning.
the automated sourcing method must be improved to exclude nonsense questions.
contrasting
train_4119
…
one problem is that the dataset contained nonsense questions.
contrasting
train_4120
Most existing RC datasets use such texts because of their availability.
narrative texts may have a closer correspondence to our everyday experience, involving the emotions and intentions of characters (Graesser et al., 1994).
contrasting
train_4121
QA4MRE required the longest distance because readers had to look for clues in the long context texts.
SQuAD and MS MARCO had lower scores.
contrasting
train_4122
This transforms the parsing problem back into a sequence-to-sequence problem, while making it easy to force the decoder to take only actions guaranteed to produce well-formed outputs.
transition-based models do not admit fast dynamic programs and require careful feature engineering to support exact search-based inference (Thang et al., 2015).
contrasting
train_4123
These models enjoy a number of appealing formal properties, including support for exact inference and structured loss functions.
previous chart-based approaches have required considerable scaffolding beyond a simple well-formedness potential, e.g.
contrasting
train_4124
As with the chart parsing formulation, we also use a margin-based method for learning under the top-down model.
rather than requiring separation between the scores of full trees, we instead enforce a local margin at every decision point.
contrasting
train_4125
(2016b) and Cai and Zhao (2016), we use word context on top of character context.
words play a relatively less important role in our model, and we find that word LSTM, which has been used by all previous neural segmentation work, is unnecessary for our model.
contrasting
train_4126
Because this is a significant problem for neural language and translation models, there are a number of methods proposed to resolve this problem, which we detail in Section 2.2.
none of these previous methods simultaneously satisfies the following desiderata, all of which, we argue, are desirable for practical use in NMT systems: Memory efficiency: The method should not require large memory to store the parameters and calculated vectors to maintain scalability in resource-constrained environments.
contrasting
train_4127
The hierarchical softmax (Morin and Bengio, 2005) predicts each word based on binary decision and reduces computation time to O(H log V ).
this method still requires O(HV) space for the parameters, and requires considerably more complicated calculation than the standard softmax, particularly at test time.
contrasting
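A minimal sketch of the trade-off described in the record above, assuming a balanced binary tree over the vocabulary (all names here are hypothetical, not from the cited paper): each word's probability is a product of O(log V) sigmoid decisions, yet every internal node keeps an H-dimensional vector, so the parameters still take O(HV) space.

```python
import numpy as np

def hierarchical_softmax_logprob(h, path_nodes, path_bits, node_vectors):
    """Log-probability of one word under a binary-tree hierarchical softmax.

    h            -- hidden state, shape (H,)
    path_nodes   -- indices of the O(log V) internal nodes on the word's path
    path_bits    -- the 0/1 branch taken at each of those nodes
    node_vectors -- parameters, shape (V - 1, H): one vector per internal
                    node, hence O(HV) space in total
    """
    logp = 0.0
    for node, bit in zip(path_nodes, path_bits):
        p_right = 1.0 / (1.0 + np.exp(-node_vectors[node] @ h))  # sigmoid branch prob.
        logp += np.log(p_right if bit else 1.0 - p_right)
    return logp  # O(H log V) time per word
```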
train_4128
Sampling-based approximations (Mnih and Teh, 2012; Mikolov et al., 2013) to the denominator of the softmax have also been proposed to reduce calculation at training.
these methods basically cannot be applied at test time, which therefore still requires heavy computation like the standard softmax.
contrasting
train_4129
We can easily obtain the maximum-probability bit array from q by simply assuming the i-th bit is 1 if q_i ≥ 1/2, or 0 otherwise.
this calculation may generate invalid bit arrays which do not correspond to actual words according to the mapping between words and bit arrays.
contrasting
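A minimal sketch of the thresholding step described in the record above and of the validity problem it raises (the word/bit-array mapping below is a made-up toy, not from the cited paper):

```python
def max_prob_bits(q):
    """Round each predicted bit probability q_i to its most likely bit."""
    return tuple(1 if qi >= 0.5 else 0 for qi in q)

# Toy mapping between words and bit arrays; thresholding can produce an
# array that appears in no entry, i.e. an invalid bit array.
bits_to_word = {(0, 1, 1): "cat", (1, 0, 0): "dog"}

bits = max_prob_bits([0.2, 0.8, 0.6])   # -> (0, 1, 1), a valid word ("cat")
bad  = max_prob_bits([0.9, 0.8, 0.6])   # -> (1, 1, 1), maps to no word
print(bits_to_word.get(bits), bits_to_word.get(bad))  # cat None
```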
train_4130
This is expected, as described in Section 3, because using raw bit arrays causes many one-off estimation errors at the output layer due to the lack of robustness of the output representation.
Hybrid-N and Binary-EC models clearly improve BLEU over Binary, and they approach that of Softmax.
contrasting
train_4131
Neural machine translation (MT) models obtain state-of-the-art performance while maintaining a simple, end-to-end architecture.
little is known about what these models learn about source and target languages during the training process.
contrasting
train_4132
The improved translation quality is often attributed to better handling of non-local dependencies and morphology generation (Luong and Manning, 2015; Bentivogli et al., 2016; Toral and Sánchez-Cartagena, 2017).
little is known about what and how much these models learn about each language and its features.
contrasting
train_4133
(2016), who use hidden vectors from a neural MT encoder to predict syntactic properties on the English source side.
we focus on representations in morphologically-rich languages and evaluate both source and target sides across several criteria.
contrasting
train_4134
Recently, a number of approaches to multimodal sentiment analysis, producing interesting results, have been proposed (Pérez-Rosas et al., 2013; Wollmer et al., 2013; Poria et al., 2015).
there are major issues that remain unaddressed.
contrasting
train_4135
Normally, doing something similar, i.e., monotonous or repetitive might be perceived as negative.
the nearby utterances "It engages the audience more", "they took a new spin on it", "and I just loved it" indicate a positive context.
contrasting
train_4136
The textual modality, combined with non-textual modes, boosts the performance in IEMOCAP by a large margin.
the margin is less in the other datasets.
contrasting
train_4137
The performance was poor for the audio and textual modalities, as the MOUD dataset is in Spanish while the model is trained on the MOSI dataset, which is in English.
notably the visual modality performs better than the other two modalities in this experiment, which means that in cross-lingual scenarios facial expressions carry more generalized, robust information than audio and textual modalities.
contrasting
train_4138
For the fusion, the hierarchical fusion framework was used.
information from neighboring utterances, e.g., "And I really enjoyed it" and "The countryside which they showed while going through Ireland was astoundingly beautiful" indicate its positive context and help our contextual model to classify the target utterance correctly.
contrasting
train_4139
For example, the utterance "who doesn't have any presence or greatness at all" was classified as positive by the audio classifier (as "presence and greatness at all" was spoken with enthusiasm).
the textual modality caught the negation induced by "doesn't" and classified it correctly.
contrasting
train_4140
The lexicon designers then manually constructed lexicons for each category, augmenting their intuitions by using distributional statistics to suggest words that may have been missed (Pennebaker et al., 2015).
we follow the approach of Biber (1991), using multidimensional analysis to identify latent groupings of markers based on co-occurrence statistics.
contrasting
train_4141
The chance rate would be 50%, so this supports Hypothesis I. It is possible that inflections are semantically similar, because by definition they are changes in the form of a word to mark distinctions such as tense, person, or number.
different inflections of a single word form might be used to mark different stances (e.g., some stances might be associated with the past while others might be associated with the present or future).
contrasting
train_4142
The relatively modest effect sizes are unsurprising, given the short length of the texts.
these differences lend insight to the relationship between formality and politeness, which may seem to be closely related concepts.
contrasting
train_4143
In addition to facilitating discovery of topical trends (Gardner et al., 2010), topic modeling is used for a wide variety of problems including document classification (Rubin et al., 2012), information retrieval (Wei and Croft, 2006), author identification (Rosen-Zvi et al., 2004), and sentiment analysis (Titov and McDonald, 2008).
the most compelling use of topic models is to help users understand large datasets (Chuang et al., 2012).
contrasting
train_4144
Once Q is constructed, topic recovery requires O(KV² + K²VI), where K is the number of topics, V is the vocabulary size, and I is the average number of iterations (typically 100-1000).
traditional topic …

  Anchor     Top Words in Topics
  backpack   backpack camera lens bag room carry fit cameras equipment comfortable
  camera     camera lens pictures canon digital lenses batteries filter mm photos
  bag        bag camera diaper lens bags genie smell room diapers odor

  Table 1: Three separate attempts to construct a topic concerning camera bags in Amazon product reviews with single-word anchors.
contrasting
train_4145
The anchor word "backpack" may seem strange.
this dataset contains nothing about regular backpacks; thus, "backpack" is unique to camera bags.
contrasting
train_4146
This is in harmony with our quantitative measurements of topic coherence, and may be the result of our stopping criteria: when users judged the topics to be useful.
100% of our users feel that the topics created through interaction were better than those generated from Gram-Schmidt anchors.
contrasting
train_4147
Recurrent networks (RNNs) (Elman, 1990) have recently become very popular for sequence tagging tasks such as entity extraction that involves a set of contiguous tokens.
their ability to identify relations between non-adjacent tokens in a sequence, e.g., the head nouns of two entities, is less explored.
contrasting
train_4148
During training, we pass the gold label embedding to the next time step which enables better training of our model.
at test time, when the gold label is not available, we use the predicted label at the previous time step as input to the current step.
contrasting
train_4149
Thus, for our approach described so far if we only compute the argmax on our objective then we limit our model to output only one relation label per token.
from our analysis of the dataset, an entity may be related to more than one entity in the sentence.
contrasting
train_4150
We can also use Sparsemax (Martins and Astudillo, 2016) instead of softmax which is more suitable for sparse distributions.
we leave it for future work.
contrasting
train_4151
Table 4 clearly shows that the overall performance of all methods is lower for verb similarity.
the improvement from using both signed clustering and thesaurus lookup is also larger.
contrasting
train_4152
based on manually engineered semantic grammars (Jia and Liang, 2016)) and learning through direct interaction with users (e.g., where a single user teaches the model new concepts (Wang et al., 2016)).
there are unique advantages to our approach, including showing (1) that non-linguists can write SQL to encode complex, compositional computations (see Fig 1 for an example), (2) that external paraphrase resources and the structure of facts from the target database itself can be used for effective data augmentation, and (3) that actual database users can effectively drive the overall learning by simply providing feedback about what the model is currently getting correct.
contrasting
train_4153
All three of these languages are modeled after natural language to simplify parsing.
none of them is used to query databases outside of the semantic parsing literature; therefore, they are understood by few people and not supported by standard database implementations.
contrasting
train_4154
GUSP (Poon, 2013) creates an intermediate representation that is then deterministically converted to SQL to obtain an accuracy of 74.8% on ATIS, which is boosted to 83.5% using manually introduced disambiguation rules.
it requires a lot of SQL specific engineering (for example, special nodes for argmax) and is hard to extend to more complex SQL queries.
contrasting
train_4155
To date, most efforts to leverage discourse information to detect salient content from dialogues have focused on encoding gold-standard discourse relations as features for use in classifier training (Murray et al., 2006; Galley, 2006; McKeown et al., 2007; Bui et al., 2009).
automatic discourse parsing in dialogues is still a challenging problem (Perret et al., 2016).
contrasting
train_4156
This makes the problem computationally easier by enabling the use of maximum spanning tree-style parsing approaches.
argumentation in the wild can be less well-formed.
contrasting
train_4157
Possible link types are reason and evidence, and proposition types are split into five fine-grained categories: POLICY and VALUE contain subjective judgements/interpretations, where only the former specifies a specific course of action to be taken.
TESTIMONY and FACT do not contain subjective expressions, the former being about personal experience, or "anecdotal."
contrasting
train_4158
Xiao and Cho (2016) extended that architecture by inserting a recurrent neural network layer between the convolutional layer and the classification layer.
our contributions follow Ko et al.
contrasting
train_4159
(2010) developed a two-step approach by first predicting implicit connectives whose sense is then disambiguated to obtain the relation.
the pipeline approach usually suffers from error propagation, and the method itself has relied on hand-crafted features which do not necessarily generalize well.
contrasting
train_4160
Our experiments will demonstrate that ∆ρ can be accurately learned from data.
the features we used for this are not factorizable over the edges of the latent trees.
contrasting
train_4161
Note that Björkelund and Kuhn (2014) perform inexact search on the same latent tree structures to extend the model to non-local features.
to our approach, they use beam search and accumulate the early updates.
contrasting
train_4162
Lexical resources such as dictionaries and gazetteers are often used as auxiliary data for tasks such as part-of-speech induction and named-entity recognition.
discriminative training with lexical features requires annotated data to reliably estimate the lexical feature weights and may result in overfitting the lexical features at the expense of features which generalize better.
contrasting
train_4163
Particle filtering is typically used in online settings, including word segmentation (Borschinger and Johnson, 2011), to make decisions before all of x has been observed.
we are interested in the inference (or smoothing) problem that conditions on all of x (Dubbin and Blunsom, 2012; Tripuraneni et al., 2015).
contrasting
train_4164
If monthly is listed in (only) the adjective lexicon, this tells us that P_ADJ sometimes generates monthly and therefore that H_ADJ may also tend to generate other words that end with -ly.
for us, P_ADV(monthly) > 0 as well, allowing us to still correctly treat monthly as a possible adverb if we later encounter it in a training or test corpus.
contrasting
train_4165
(2017) based on appending domain tags.
our method is different from the above methods in that we apply domain adaptation techniques to the outputs of a generative model rather than a natural data domain.
contrasting
train_4166
Various neural models based on attention mechanisms (Wang and Jiang, 2016; Seo et al., 2016; Xiong et al., 2016; Dhingra et al., 2016; Kadlec et al., 2016; Trischler et al., 2016b; Sordoni et al., 2016; Cui et al., 2016) have been proposed to tackle the tasks of question answering and reading comprehension.
the performance of these neural models largely relies on a large amount of labeled data available for training.
contrasting
train_4167
model policy puts near-uniform probability over the decisions at each time step.
this causes shorter programs to have orders of magnitude higher probability than longer programs, as illustrated in Figure 2 and as we empirically observe.
contrasting
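A quick arithmetic check of the length bias described in the record above (the branching factor of 10 is a made-up illustration, not a number from the paper): with b choices per time step under a near-uniform policy,

```latex
p(\text{program of length } L) \approx b^{-L},
\qquad b = 10:\; p_{L=3} = 10^{-3} \quad\text{vs.}\quad p_{L=10} = 10^{-10},
```

so a short program receives seven orders of magnitude more probability than a ten-step one.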
train_4168
In REINFORCE, a common solution is to sample from an -greedy variant of the current policy.
MML exploration with beam search is deterministic.
contrasting
train_4169
In our experiments, J_MML performs significantly better than J_RL (Table 4).
while J_MML assigns uniform weight across examples, it is still not uniform over the programs within each example.
contrasting
train_4170
More uniform upweighting across reward-earning programs leads to higher accuracy and fewer spurious programs, especially in SCENE.
no single value of β performs best over all domains.
contrasting
train_4171
On the depicted training example, BS-MML actually achieves higher expected reward / marginal probability than RANDOMER, but it does so by putting most of its probability on a spurious program, a form of overfitting.
RANDOMER spreads probability mass over multiple reward-earning programs, including the correct ones.
contrasting
train_4172
This could be because their method is very aggressive in dealing with the history (as explained later in the Experiments section).
our method has a better way of handling history (by passing context vectors through an LSTM recurrent network) which gives us the flexibility to forget/retain some portions of the history and at the same time produce diverse context vectors at successive time steps.
contrasting
train_4173
Comparison with baseline diversity models: The baseline diversity model M1 performs on par with our models D1 and SD1 but not as well as D2 and SD2.
the model M2 performs very poorly.
contrasting
train_4174
Due to the difficulty of abstractive summarization, the great majority of past work has been extractive (Kupiec et al., 1995; Paice, 1990; Saggion and Poibeau, 2013).
the recent success of sequence-to-sequence models (Sutskever et al., 2014) has made abstractive summarization viable.
contrasting
train_4175
In general, ROUGE-1 and ROUGE-2 were considered as the baselines for validating the performance of AP because these variants strongly correlate with human evaluation methods (Owczarzak et al., 2012a,b).
the comparison could be repeated with ROUGE-3, ROUGE-4 and ROUGE-BE, which have been found to predict manual Pyramid better than ROUGE-1 and ROUGE-2 (Rankel et al., 2013).
contrasting
train_4176
Some previous works apply this framework to summarization generation tasks (Gu et al., 2016; Gulcehre et al., 2016).
abstractive sentence summarization is different from MT in two ways.
contrasting
train_4177
The most widely used metric for evaluating such dialogue systems is BLEU (Papineni et al., 2002), a metric measuring word overlaps originally developed for machine translation.
it has been shown that BLEU and other word-overlap metrics are biased and correlate poorly with human judgements of response quality (Liu et al., 2016).
contrasting
train_4178
For example, the two responses in Figure 1 are equally appropriate given the context.
if we simply change the context to: "Have you heard of any good movies recently?"
contrasting
train_4179
For training VHRED, we use a context embedding size of 2000.
we found the ADEM model learned more effectively when this embedding size was reduced.
contrasting
train_4180
Compared to evaluating a single response, this evaluation is arguably closer to the end-goal of chatbots.
such an evaluation is extremely challenging to do in a completely automatic way.
contrasting
train_4181
Using a feedforward NN and embedding features, TUPA_MLP obtains higher scores than TUPA_Sparse, but is outperformed by the LSTM parser on primary edges.
using better input encoding allowing virtual look-ahead and look-behind in the token representation, TUPA_BiLSTM obtains substantially higher scores than TUPA_MLP and all other parsers, on both primary and remote edges, both in the in-domain and out-of-domain settings.
contrasting
train_4182
Several transition-based AMR parsers have been proposed: CAMR assumes syntactically parsed input, processing dependency trees into AMR (Wang et al., 2015a,b, 2016; Goodman et al., 2016).
the parsers of Damonte et al.
contrasting
train_4183
In some instances (as in the figure), our system is nonetheless able to synthesize a close approximation.
in the most complex cases, the predictions deviate significantly from the correct implementation.
contrasting
train_4184
NMT proves to outperform conventional statistical machine translation (SMT) significantly across a variety of language pairs (Junczys-Dowmunt et al., 2016) and becomes the new de facto method in practical MT systems.
there still remains a severe challenge: it is hard to interpret the internal workings of NMT.
contrasting
train_4185
Defined on language structures with varying granularities, these translation rules are interpretable from a linguistic perspective.
NMT takes an end-to-end approach: all internal information is represented as real-valued vectors or matrices.
contrasting
train_4186
As test cases, we use POS tagging and Named Entity Recognition, both standard preprocessing steps for many NLP applications.
our approach is general and can also be applied to other classification tasks.
contrasting
train_4187
Manual annotations are often inconsistent and annotation errors can thus be identified by looking at the variance in the data.
to this, we focus on detecting errors in automatically labelled data.
contrasting
train_4188
Moreover, extraction is far from the way humans write summaries.
abstractive methods are able to generate better summaries with the use of arbitrary words and expressions, but generating abstractive summaries is much more difficult in practice.
contrasting
train_4189
Human languages exhibit a wide range of phenomena, within some limits.
some structures seem to occur or co-occur more frequently than others.
contrasting
train_4190
Typology is a Small-Data Problem.
to many common problems in applied NLP, e.g., part-of-speech tagging, parsing and machine translation, the modeling of linguistic typology is fundamentally a "small-data" problem.
contrasting
train_4191
In this case, "东路" becomes UNK.
the models could infer that "东路" is also a location, from its character composition and neighboring words.
contrasting
train_4192
Recently end-to-end models have outperformed traditional pipeline approaches, predicting syntactic or semantic structure as intermediate steps, on NLU tasks such as sentiment analysis and semantic relatedness (Le and Mikolov, 2014; Kiros et al., 2015), question answering and textual entailment (Rocktäschel et al., 2015).
the linguistic structure used in applications has predominantly been shallow, restricted to bilexical dependencies or trees.
contrasting
train_4193
The attention is computed as with hard attention, as α_ij = softmax(u_ij).
instead of making a hard selection, a weighted average over the encoder vectors is computed as c_i = Σ_j α_ij · h^a_j. This vector is used instead of h^a_j for prediction and feeding to the next timestep.
contrasting
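A minimal numpy sketch of the soft-attention step described in the record above, under the usual formulation (shapes and names here are illustrative, not taken from the paper):

```python
import numpy as np

def soft_attention(scores, encoder_states):
    """Soft selection: a weighted average of encoder vectors.

    scores         -- unnormalized attention scores u_ij for one decoder
                      step i, shape (T,)
    encoder_states -- encoder vectors h^a_j, shape (T, H)
    """
    alpha = np.exp(scores - scores.max())  # numerically stable softmax
    alpha /= alpha.sum()
    return alpha @ encoder_states          # context vector, shape (H,)

# Example: three encoder states of size 4.
u = np.array([0.1, 2.0, -1.0])
H = np.random.randn(3, 4)
print(soft_attention(u, H))
```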
train_4194
The parser has an option not to be restricted by the ERG.
neither of these approaches have results available that can be compared directly to our setup, or generally available implementations.
contrasting
train_4195
(2014) proposed a two-stage parser that first predicts concepts or subgraphs corresponding to sentence segments, and then parses these concepts into a graph structure.
MRS has a large proportion of abstract nodes that cannot be predicted from short segments, and interact closely with the graph structure.
contrasting
train_4196
The arc-eager-based models again give better performance, mainly due to improved concept prediction accuracy.
concept prediction remains the most important weakness of the model; Damonte et al.
contrasting
train_4197
If only some (not all) entities are labeled, it is not effective to learn a sequence labeling model.
every single labeled entity, along with its contexts, may be used to learn the proposed model.
contrasting
train_4198
FFNN is a powerful computation model.
it requires fixed-size inputs and lacks the ability of capturing long-term dependency.
contrasting
train_4199
In terms of statistical significance, our best score on the SemEval dataset (908 samples) is not significant at the 95% confidence level.
the accuracy improvements of PreWin over the common baselines are highly statistically significant with 99.9%+ confidence.
contrasting