id
stringlengths
7
12
sentence1
stringlengths
6
1.27k
sentence2
stringlengths
6
926
label
stringclasses
4 values
train_15400
Interestingly, the improvements from monolingual data are additive to the gains from ensembling of 3 models with different random seeds.
the use of synthetic parallel data still outperforms our approach both in single and ensemble systems.
contrasting
train_15401
While separating out a language model allowed us to carry out multi-task training on mixed data types, it constrains gradients from monolingual data examples to a subset of source-independent network parameters (σ).
synthetic data always affects all network parameters (θ) and has a positive effect despite source sequences being noisy.
contrasting
train_15402
With this additional gating mechanism, the final syntactic GCN computation is formulated as The inability of GCNs to capture dependencies between nodes far away from each other in the graph may seem like a serious problem, especially in the context of SRL: paths between predicates and arguments often include many dependency arcs (Roth and Lapata, 2016).
when graph convolution is performed on top of LSTM states (i.e., LSTM states serve as input to GCN) rather than static word embeddings, GCN may not need to capture more than a couple of hops.
contrasting
train_15403
The link embedding i for the ith question token is computed as: e ∈Eτ ∪{∅} exp s(e , i) For WIKITABLEQUESTIONS, we ran the entity embedding and linking module over every entity.
this approach may be prohibitively expensive in applications with a very large number of entities.
contrasting
train_15404
Dynamic programming on denotations (DPD) is an automatic procedure for enumerating logical forms that execute to produce a particular value; it leverages the observation that there are fewer denotations than logical forms to enumerate this set relatively effi-ciently.
many of these logical forms are spurious, in the sense that they do not represent the question's meaning.
contrasting
train_15405
(2015) explores visual concept learning from few examples, and presents encouraging results for one-shot learning by learning representations over Bayesian programs.
none of these address the issue of learning from natural language.
contrasting
train_15406
We provide more details in Section 3.3.
the probability of the latent statement evaluation values z can be parametrized using a probabilistic semantic parsing model (with associated parameters θ p ).
contrasting
train_15407
If the length of d is 1, all the weights in W d have value 1, and γ is a linear activation, then this formula is equivalent to a regular cosine similarity.
we use a larger length for d to capture more features, use tanh as the activation function, and optimise the weights of W d during training, giving the framework more flexibility to customise the model for the task of metaphor detection.
contrasting
train_15408
Techniques have been developed to categorize questions based on the nature of these information needs in the context of the TREC QA challenge (Harabagiu et al., 2000), and to identify questions asking for similar information (Shtok et al., 2012;Zhang et al., 2017;Jeon et al., 2005); questions have also been classified by topic (Cao et al., 2010) and quality (Treude et al., 2011;Ravi et al., 2014).
our work is not concerned with the information need central to QA applications, and instead focuses on the rhetorical aspect of questions.
contrasting
train_15409
Types 6 and 7 are much more combative: in type 6 questions the asker explicitly attempts to force the minister to concede/accept a point that would undermine some government stance, while type 7 contains condemnatory questions that prompt the minister to justify a policy that is self-evidently bad in the eyes of the asker.
type 2 constitutes tamer narrow queries that require the minister to simply report on non-partisan matters of policy.
contrasting
train_15410
11 In the unanswerable and unanswered tasks, we find that the BOW features do not perform significantly better than a random (50%) baseline.
the latent question features produced by our framework bring additional predictive signal and outperform the baseline when combined with BOW (binomial p < 0.05), achieving accuracies of 66% and 62% respectively (compared with 55% and 50% for BOW alone).
contrasting
train_15411
These methods typically require historical data for fitting model parameters, and may be sensitive to issues such as concept drift (Fung, 2014).
our approach does not rely on historical data for training; instead we forecast outcomes of future events by directly extracting users' explicit predictions from text.
contrasting
train_15412
With the increasing number of hops, the performance improves.
when the number of hops is larger than 3, the performance decreases due to overfitting.
contrasting
train_15413
training on multiple datasets has shown promising results (Collobert and Weston, 2008).
multitask learning requires access to the emoji dataset whenever the classifier needs to be tuned for a new target task.
contrasting
train_15414
Note that word coverage can be a misleading metric in this context as for many of these small datasets a word will often occur only once in the training set.
all of the words in the pretraining vocabulary are present in thousands (if not millions) of observations in the emoji pretraining dataset thus making it possible for the model to learn a good representation of the emotional and semantic meaning.
contrasting
train_15415
A naive model of yielding v C could utilise the attention mechanism over h t , deriving a weighted sum according to user information.
dynamic memory networks have been shown highly useful for deriving abstract semantic information compared with simple attention, and hence we follow Sukhbaatar et al.
contrasting
train_15416
A straightforward approach to predicting the rating score of a product is to take the average of existing review scores.
the drawback is that it cannot reflect the the variance in user tastes.
contrasting
train_15417
In order to integrate user preferences into the rating, we instead take a user-based weighted average of existing rating scores, so that the scores of reviews that are closer to the user preference are given higher weights.
existing ratings can be all different from a users personal rating, if the existing reviews do not come from the user's neighbours.
contrasting
train_15418
Constraint tightening: In subproblem P 2 , we consider a vertex and all of its adjacent arcs of positive weight.
we know that our optimal solution must satisfy tree-shape constraints (5).
contrasting
train_15419
Pop (2002) also used Lagrangian relaxation -in the non directed case -where a single subproblem is solved in polynomial time.
the relaxed constraints are inequalities: if the dual objective returns a valid primal solution, it is not a sufficient condition in order to guarantee that Table 3: Dependency parsing and tagging results.
contrasting
train_15420
Theoretically, the refinement can be made until there is no update in the scoring matrix.
experimental results show that comparable performance can be achieved with no more than twice high-order refinements (see Section 3).
contrasting
train_15421
Their greedy algorithm breaks the parsing into a sequence of local steps, which correspond to choosing the head for each modifier word (one arc at a time) in the bottom-up order relative to the current tree.
we employed the global inference algorithm to change the entire tree (all at a time) in each refinement step, which makes the improvement more efficient.
contrasting
train_15422
We believe this is because unsupervised parsers have relatively low accuracy and forcing them to reconcile would not lead to better parses.
joint decoding during training helps propagate useful inductive biases between models and thus leads to better trained models.
contrasting
train_15423
their published hyperparameters and preprocessing.
rather than selecting the final model based on reranking performance, we instead perform early stopping based on development set perplexity.
contrasting
train_15424
Parameterized by the encoding parameters Λ and the reconstruction parameters Θ, our NCRF-AE consists of the encoder and the decoder, which together forms the log-likelihood a highly non-convex function.
a careful observation shows that if we fix the encoder, the lower bound derived in the E step, is convex with respect to the reconstruction parameters Θ in the M step.
contrasting
train_15425
This system will fail to capture non-projective TAG derivation structures.
as noted in Section 2, there is almost no non-projectivity in TAG derivation structures of English.
contrasting
train_15426
WSJ Section 00 does not have any non-projective sentences.
wSJ Sections 01-22 contain 0.6% of non-projective sentences in dependency grammar (Chen and Manning, 2014), an order of magnitude more than non-projectivity for TAG.
contrasting
train_15427
Setup 3 considers the most query Figure 3: Entity-relation table entity pairs in total since multi-token entities are split into their comprising tokens.
setup 3 represents a more realistic scenario than setup 1 or setup 2 because in most cases, entity boundaries are not given.
contrasting
train_15428
6 Our results are best comparable with since we use the same setup and traintest splits.
their model is more complicated with a lot of hand-crafted features and various iterations of modeling dependencies among entity and relation classes.
contrasting
train_15429
However, their model is more complicated with a lot of hand-crafted features and various iterations of modeling dependencies among entity and relation classes.
we only use pre-trained word embeddings and train our model end-to-end with only one iteration per entity pair.
contrasting
train_15430
This demonstrates the strength of neural representation learning for end-to-end relation extraction.
miwa and Bansal (2016)'s model is trained locally, without considering structural correspondences between incremental decisions.
contrasting
train_15431
First, Miwa and Bansal (2016) rely on external syntactic parsers for obtaining syntactic information, which is crucial for relation extraction (Culotta and Sorensen, 2004;Zhou et al., 2005;Bunescu and Mooney, 2005;Qian et al., 2008).
parsing errors can lead to encoding inaccuracies of tree-LSTMs, thereby hurting relation extraction potentially.
contrasting
train_15432
Our method can avoid the problem since we do not compute parser outputs.
the computation complexity is largely reduced by using our method since sequential LSTMs are based on inputs only, while the dependency path LSTMs should be computed based on the dynamic entity detection outputs.
contrasting
train_15433
Embedding methods have shown state-ofthe-art results on several benchmark datasets.
by construction, these benchmark datasets differ from data in real KGs.
contrasting
train_15434
First, benchmark datasets have largely been restricted to the most frequently occurring entities in the KG.
in most KGs, entities are associated with a sparse set of observations.
contrasting
train_15435
Through sampling, FB15K rebalances Freebase, increasing the diversity of entities and relations.
wordNet and wN18 have similar diversity statistics.
contrasting
train_15436
A sparse, high-quality set of extractions may be insufficient to learn meaningful embeddings.
the benefit of incorporating additional, unreliable facts may also be questionable.
contrasting
train_15437
Unspecialized word embeddings (Mikolov et al., 2013;Pennington et al., 2014) capture general semantic properties of words, but are unable to differentiate between different types of semantic relations (e.g., vectors of car and driver might be as similar as vectors of car and vehicle).
we often need embeddings to be similar only if an exact lexico-semantic relation holds between the words.
contrasting
train_15438
Distributional and path-based models have been used to discriminate between multiple lexicosemantic relations, including hypernymy and meronymy, at once .
as pointed out by (Chersoni et al., 2016), distributional vectors and scores based on their comparison fail to discriminate between multiple relation types at once.
contrasting
train_15439
(2015) propose dataset splits with no lexical overlap between the train and test portions.
model's performance in a lexically-split setting is an overly pessimistic estimate of models' true performance -in a realistic scenario, the model will occasionally make predictions for pairs involving some of the concepts from the training set.
contrasting
train_15440
Although existing RE systems have achieved promising results with the help of distant supervision and neural models, they still suffer from a major drawback: the models only learn from those sentences contain both two target entities.
those sentences containing only one of the entities could also provide useful information and help build inference chains.
contrasting
train_15441
(Zeng et al., 2014;dos Santos et al., 2015) adopt CNN in this task, and (Zeng et al., 2015;Lin et al., 2016) combine attention-based multiinstance learning which shows promising results.
these above models merely learn from those sentences which directly contain both two target entities.
contrasting
train_15442
Hence we only apply dropout to s i for PCNN.
even with a dropout rate of 0.5, RNN still performs well.
contrasting
train_15443
(2010) have applied it to create relation extraction datasets for a large-scale KB.
to our dataset, their data contains a single relation instance per sentence.
contrasting
train_15444
Much effort has been made in reducing the influence of noisy sentences within the bag, including methods based on at-least-one assumption (Hoffmann et al., 2011;Ritter et al., 2013;Zeng et al., 2015) and attention mechanisms over instances (Lin et al., 2016;Ji et al., 2017).
the sentence level denoise methods can't fully address the wrong labeling problem largely because they use a hard-label method in which the labels of entity pairs are immutable dur-ing training, no matter whether they are correct or not.
contrasting
train_15445
Wrong corrections like Case 1 fail to distinguish similar relations (both Nationality and place of birth are relations between people and locations) between entities because of their similar sentence patterns.
wrong corrections like Case 1 are rare (5/39) in our experiments.
contrasting
train_15446
ResNet has won the ImageNet ILSVRC 2015 classification task, and achieved state-of-theart performances in many computer vision tasks.
the effect of residual learning on noisy natural language processing tasks is still not well understood.
contrasting
train_15447
Empirically, we evaluate on the NYT-Freebase dataset (Riedel et al., 2010), and demonstrate the state-of-the-art performance using deep CNNs with identify mapping and shortcuts.
to popular beliefs in vision that deep residual network only works for very deep CNNs, we show that even with a moderately deep CNNs, there are substantial improvements over vanilla CNNs for relation extraction.
contrasting
train_15448
Some relation types are more lexical by nature: relations such as dog is an animal; a teacher works at a school; a car is kept at a parking lot, can be identified out of context.
many relations are contextual; they are time-anchored or tied to extra-linguistic, situational context.
contrasting
train_15449
The input-output combination method still has an advantage, and concatenation and multiplication also perform well.
the advantages over the baseline are less significant than when the number of clusters was identical to the standard.
contrasting
train_15450
Our measurement approach can also be defined as 'local' to some extent: the linear projections that we learn are mostly based and evaluated on the nearest neighborhood data.
this method is different in that its scope is not single words but pairs of typed entities ('location' and 'armed group' in our case) and the semantic relations between them.
contrasting
train_15451
The cumulative baseline results are slightly better, probably simply because they are trained on more data.
they still perform much worse than the models trained using incremental updates.
contrasting
train_15452
If a token appears in the ingredients, we set z v = 1 and z v = 0 otherwise.
we can train the model in a fully supervised fashion, i.e., we can obtain the probability of it may be not be accurate.
contrasting
train_15453
Recurrent Neural Network (RNN) language models recently outperformed traditional n-gram LMs across a range of tasks (Jozefowicz et al., 2016).
an important practical issue associated with such neural-network LMs is the high computational cost incurred.
contrasting
train_15454
The NEG objective function is considered a simplification of the NCE's objective, unsuitable for learning language models (Dyer, 2014).
in this study, we show that despite its simplicity, it can be used in a principled way to effectively train a language model, based on PMI matrix factorization.
contrasting
train_15455
Much research has been done on subword-level and subword-aware 1 neural language modeling when subwords are characters (Ling et al., 2015b;Kim et al., 2016;Verwimp et al., 2017) or morphemes (Botha and Blunsom, 2014;Qiu et al., 2014;Cotterell and Schütze, 2015).
not much work has been done on syllable-level or syllable-aware NLM.
contrasting
train_15456
These models can be spiced up with the most recent regularization techniques for RNNs (Gal and Ghahramani, 2016) to reach state-of-the-art.
to make our results directly comparable to those of Kim et al.
contrasting
train_15457
It seems that syllable-aware language models fail to outperform competitive character-aware ones.
usage of syllabification can reduce the total number of parameters and increase the training speed, albeit at the expense of languagedependent preprocessing.
contrasting
train_15458
The core of our model closely corresponds to the structure of the RMN (as technically described in Section 3.2).
we provide the model with distinct types of informative input for each view, and, reformulate the objective in a way that jointly optimizes parameterizations of all views to encode distinct information (cf., Section 3.3).
contrasting
train_15459
We therefore restrict our quantitative analysis to the Amazon data set.
we also report qualitative results on the Gutenberg corpus, demonstrating that our model induces meaningful novel representations for corpora of varying size and diversity.
contrasting
train_15460
Publicly-available labeled datasets are scarce for many NLP tasks, and crowdsourcing services such as Amazon Mechanical Turk 1 (AMT) offer researchers a quick, inexpensive means of labeling their data.
workers employed by these services are typically unfamiliar with the annotation tasks, and they may have little motivation to perform high-quality work due to factors such as low pay and anonymity.
contrasting
train_15461
To illustrate such a case, consider the example in Figure 3: In our study, all 5 workers incorrectly selected yes as answer.
(perhaps somewhat counterintuitively to nonexperts) under the PropBank paradigm it is the "phone representative" that provide explicit help in this sentence, not "Vanguard."
contrasting
train_15462
In particular, even at similar levels of expert involvement, it outperforms the HYBRID all5 approach.
we also note that with an F 1 -score of 0.94, our approach does not yet reach the quality of gold annotated data.
contrasting
train_15463
We focused on maximizing the accuracy of dependency parsing on the development data in our experiments.
the sizes of the training data are different across the different tasks; for example, the semantic tasks include only 4,500 sentence pairs, and the dependency parsing dataset includes 39,832 sentences with word-level annotations.
contrasting
train_15464
Task-oriented learning of low-level tasks Each task in our JMT model is supervised by its corresponding dataset.
it would be possible to learn low-level tasks by optimizing highlevel tasks, because the model parameters of the low-level tasks can be directly modified by learning the high-level tasks.
contrasting
train_15465
At first sight, this task appears formidable, as it would imply that a bilingual semantic space can be constructed by using monolingual corpora only.
the existence of structural isomorphism across monolingual embedding spaces points to the feasibility of this task: The transformation exists right there only to be discovered by the right tool.
contrasting
train_15466
For example, we choose c = L 2 2 for the EMDOT approach to obtain a closed-form solution to the subprogram (10), otherwise we would have to use gradient-based solvers.
the WGAN approach calls for c = L 2 because the Kantorovich-Rubinstein duality takes a simple form only in this case.
contrasting
train_15467
Similar to ours, the underlying idea is to match cross-lingually at the level of distribution rather than word.
the distributions considered in that work are the hidden states of neural embedding models during the course of training.
contrasting
train_15468
Each decoding step in the unfolded network computes a single attention weight vector.
ensemble decoding would compute one attention weight vector for each of the K input models.
contrasting
train_15469
Srinivas and Babu (2015) propose to add the outgoing weights of j to the weights of a similar neuron i to compensate for the removal of j.
we have found that this approach does not work well on NMT networks.
contrasting
train_15470
0.8 BLEU better than the single system (a)).
using the data-bound version of our shrinking algorithm (Sec.
contrasting
train_15471
For example, unlike recursive neural networks (Socher et al., 2013), GCNs do not require the graphs to be trees.
in this work we solely focus on dependency syntax and leave more general investigation for future work.
contrasting
train_15472
(2016) use convolution in both the encoder and the decoder; they make use of dilation to increase the receptive field.
to both approaches, we use a GCN informed by dependency structure to increase it.
contrasting
train_15473
In simultaneous translation/interpretation, which has recently been studied in the context of neural machine translation (Gu et al., 2016), the decoding objective is formulated as a trade-off between the translation quality and delay.
when a machine translation system is used as a part of a larger information extraction system, it is more important to correctly translate named entities and events than to translate syntactic function words.
contrasting
train_15474
Psycholinguistic Features: Psychological differences are useful for our problem, because professional journalists tend to express opinion conservatively to avoid unnecessary arguments.
satirical news includes aggressive language for the entertainment purpose.
contrasting
train_15475
The task is mostly based on lexical cues and specific reporting verbs that are the signal for the majority of direct quotations.
in the case of quotation attribution the task is to find the source, cue, and content of the quotation, whereas in our case, for a given citing paragraph and reference we simply assess which text fragment is covered by the reference.
contrasting
train_15476
When comparing the sequence classifier CSP S to the plain classifier CSP C, we see a marginal difference of 1.3% for F1.
it will become more evident later that classifying jointly the text fragments for the different span buckets, outperforms the plain classification model.
contrasting
train_15477
CSP C achieves a similar performance in this case.
for the cases where the span is at the sub-sentence level or across multiple sentences, the performance of baselines drops drastically.
contrasting
train_15478
For larger spans, for instance, for (1, 2], we are still slightly better, with roughly 3% less erroneous span when comparing CSP C and CS.
only in the case of spans with > 2, we perform below the CS baseline.
contrasting
train_15479
Despite, the smaller erroneous span, the CS baseline never includes more than one sentence, and as such it does not include many erroneous spans for the larger buckets.
it is by definition unable to recognize any longer spans.
contrasting
train_15480
For instance, Daxenberger and Gurevych (2012) categorized edits based on whether edits affect the text meaning, resulting in syntactic edit categories such as file deletion, reference modification, etc.
simply understanding the syntactic revision operation types does not provide the information we seek: why do editors do what they do?
contrasting
train_15481
We found similar trends for interactions between previous quality and elaboration and verification, which are essential for articles in the starting stages.
the positive slopes for simplification, wikification and process suggest that, as articles increase in quality, simplifying articles' content, adding proper links or reorganizing their structure becomes more important.
contrasting
train_15482
In a semi-supervised setting, our method of reusing (V)RAE encodings in a reading comprehension framework is effective, with SWEAR-PR reaching an accuracy of 66.5 on 1% of the dataset against last year's state of the art of 71.8 using the full dataset.
these methods require careful configuration and tuning to succeed, and making them more robust presents an excellent opportunity for future work.
contrasting
train_15483
For language, the direct analogue would be to paraphrase the input (Madnani and Dorr, 2010).
high-precision paraphrase generation is challenging, as most edits to a sentence do actually change its meaning.
contrasting
train_15484
Standard evaluation is overly lenient on models that rely on superficial cues.
adversarial evaluation reveals that existing models are overly stable to perturbations that alter semantics.
contrasting
train_15485
In recent years, many methods have been proposed for commonsense machine comprehension.
these methods mostly either focus on matching explicit information in given texts (Weston et al., 2014;Wang and Jiang, 2016a,b;Zhao et al., 2017), or paid attention to one specific kind of commonsense knowledge, such as event temporal relation (Chambers and Jurafsky, 2008;Modi and Titov, 2014;Pichotta and Mooney, 2016b;Hu et al., 2017) and event causality (Do et al., 2011;Radinsky et al., 2012;Hashimoto et al., 2015;Gui et al., 2016).
contrasting
train_15486
To identify whether a hypothesis is reasonable, we need to consider all possible inferences.
in human reasoning process, not all inference rules have the same possibility to be applied, because the more reasonable inference will be proposed more likely.
contrasting
train_15487
Rosenthal and McKeown (2012) set out to explore this direction by conducting cross-domain experiments for detecting claims in blog articles from LiveJournal and discussions taken from Wikipedia.
they focused on relatively similar datasets that both stem from the social media domain and in addition annotated the datasets themselves, leading to an identical conceptualization of the notion of claim.
contrasting
train_15488
For the related important sentence detection task in text summarization, therefore propose a two-stage approach (Lee and Liu, 2003;Elkan and Noto, 2008) to augment the set of known summaryworthy sentences.
we adopt a conservative approach rather than predict too many sentences as being question-worthy: we pair up source sentences with their corresponding questions, and use just these sentence-question pairs to training the encoder-decoder model.
contrasting
train_15489
In fact, the notion of "attention" has gained popularity recently in neural network modeling, which has improved the performance of many tasks such as machine translation (Bahdanau et al., 2015;Luong et al., 2015).
very few previous works employ attention mechanism to tackle MDS.
contrasting
train_15490
These features help them select better category-specific content for the summary.
our model is basically unsupervised.
contrasting
train_15491
However, few previous works employ attention mechanism to tackle the unsupervised MDS problem.
our attention-based framework can generate summaries for multi-document summarization We propose a cascaded neural attention based unsupervised salience estimation method for compressive multi-document summarization.
contrasting
train_15492
Celikyilmaz and Hakkani-Tur (2010) estimated scores for sentences based on their latent characteristics using a hierarchical topic model, and trained a regression model to extract sentences.
they only use the latent topic information to conduct the sentence salience estimation for extractive summarization.
contrasting
train_15493
However, they only use the latent topic information to conduct the sentence salience estimation for extractive summarization.
our purpose is to model and learn the latent structure information from the target summaries and use it to enhance the performance of abstractive summarization.
contrasting
train_15494
For example, our result for S(1) "Wuhan wins men's soccer title at Chinese city games" matches the "Who Action What" structure.
the standard decoder StanD ignores the latent structures and generates some loose sentences, such as the results for S(1) "Results of men's volleyball at Chinese city games" does not catch the main points.
contrasting
train_15495
Some existing studies employ parallel corpora as artificial reference summaries (Woodsend and Lapata, 2010;Cheng and Lapata, 2016).
preparing such large volumes of reference summaries manually is sometimes costly.
contrasting
train_15496
With respect to the predictor component in the proposed model, we use an encoder-decoder architecture modeled by recurrent neural networks (Kim et al., 2016) based on recent neural extractive summarization approaches (Cheng and Lapata, 2016;Nallapati et al., 2017).
our summarization framework is applicable to all models of sentence extraction using distributed representation as inputs.
contrasting
train_15497
The proposed neural multi-task learning model, NN-ML, is significantly inferior to NN-SE and LREG.
nn-ML-CL, the proposed model with curriculum learning, is superior to all other models.
contrasting
train_15498
This result shows that merely introducing multi-task learning does not positively influence on sentence extraction.
curriculum learning overcomes the difficulty of multitask learning; thus, document classification has positive effects on sentence extraction.
contrasting
train_15499
Table 5 shows that 85.7% of sentences extracted using NN-ML-CL correspond to changes of revenues and profits.
only 40.0% of sentences extracted by NN-SE correspond to these parameters, which indicates that document classification supports extraction of sentences related to the revenue and profit change, and contributes to the improvement.
contrasting