Dataset columns:
id: string, lengths 7 to 12
sentence1: string, lengths 6 to 1.27k
sentence2: string, lengths 6 to 926
label: string, 4 classes
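The schema above is a paired-sentence classification layout: an id, two sentence strings, and a label drawn from four string classes (the sampled records below all carry the "contrasting" class). The sketch below is one hypothetical way to load and inspect such a split with the Hugging Face `datasets` library; the repository id `your-org/your-dataset` and the split name are placeholders, since this card does not state them.

```python
# Minimal sketch, assuming the schema above (id, sentence1, sentence2, label).
# The repository id below is a placeholder, not the actual dataset path.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-org/your-dataset", split="train")  # hypothetical path

# Each record exposes the four columns listed above.
example = ds[0]
print(example["id"], example["label"])
print(example["sentence1"])
print(example["sentence2"])

# Distribution over the 4 label classes (e.g., "contrasting").
print(Counter(ds["label"]))
```

Column access such as `ds["label"]` returns the whole column as a list, which makes quick class-balance checks like the one above straightforward.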
train_21400
BP is guaranteed to converge to an exact solution if the graph is a tree.
exact inference is intractable for general graphs, i.e., graphs with loops.
contrasting
train_21401
Then they trained a linear-CRF model on additional in-domain data, reducing the error up to 22%.
no results on semi-CRF was given.
contrasting
train_21402
Previous work has made significant progress on this task (Chen and Mooney, 2008;Angeli et al., 2010;Konstas and Lapata, 2012).
most approaches solve the two content selection and surface realization subtasks separately, use manual domain-dependent resources (e.g., semantic parsers) and features, or employ template-based generation.
contrasting
train_21403
They employ a probabilistic context-free grammar that specifies the structure of the event records, and then treat generation as finding the best derivation tree according to this grammar.
their method still selects and orders records in a local fashion via a Markovized chaining of records.
contrasting
train_21404
Recently the connection between less formulaic language and simple actions has been explored successfully in the context of simulated worlds (Branavan et al., 2009;Goldwasser and Roth, 2011;Branavan et al., 2011;Artzi and Zettlemoyer, 2013;Andreas and Klein, 2015) and videos (Malmaud et al., 2015;Venugopalan et al., 2015).
to our knowledge, there is no body of work that focuses on understanding the relation between natural language and complex actions and goals or on explaining flexibly the actions taken by a robot in natural language utterances.
contrasting
train_21405
When the blocks are marked with clearly identifiable logos, all models outperform our baselines by a wide margin.
when blocks are blank the situation is flipped.
contrasting
train_21406
Our results show that including inter-document contextual information yields additional improvements to those obtained from inter-sentence information.
as expected, the former are smaller than the latter, as sentences in the same post are more related than sentences in different posts.
contrasting
train_21407
In this paper, we assume that these properties do not correlate and therefore the ultradense subspaces do not overlap, e.g., D s ∩ D c = ∅.
this might not be true for other settings, e.g., sentiment and semantic information.
contrasting
train_21408
If you are thinking of buying a TV for watching football, you might go to websites such as Amazon to read customer reviews on TV products.
there are many products and each of them may have hundreds of reviews.
contrasting
train_21409
In the first step, features are extracted based on association rules or dependency patterns, and in the second step features are grouped into aspects using clustering algorithms.
our method extracts features and groups them at the same time.
contrasting
train_21410
In Naive Bayes (NB) we learn the verb-torelation mapping weights from labeled training instances.
to the other systems, EM allows learning from both labeled and unlabeled instances.
contrasting
train_21411
They also compute heuristic confidences in verb-to-relation mappings from label propagation scores, which are not probabilities.
we map verbs directly to relations, and obtain P (v p |r i ) as an integral part of our EM process.
contrasting
train_21412
Structured knowledge about the world is useful for many natural language processing (NLP) tasks, such as disambiguation, question answering or semantic search.
the extraction of structured information from natural language text is challenging because one relation can be expressed in many different ways.
contrasting
train_21413
While it uses the whole context for convolution, it performs max pooling over the three parts individually.
we propose to split the context even earlier and apply the convolutional filters to each part separately.
contrasting
train_21414
The number of positive examples per slot and year ranges from 0 (org:member of, 2014) to 581 (per:title, 2013), the number of negative examples from 5 (org:website, 2014) to 1886 (per:title, 2013).
to other relation classification benchmarks, this dataset is not based on a knowledge base (such as Freebase) and unrelated text (such as web documents) but directly on the SF assessments.
contrasting
train_21415
1 Factorization (2) relies on the hypothesis that there exists a fixed vector for each candidate answer representing its meaning.
as we argued in Section 1, an entity surface does not possess meaning; rather, it serves as an anchor to link pieces of information about it.
contrasting
train_21416
In contrast, significant degradation in BLEU is observed at 5000wpm for Chinese-to-English, a language pair with complicated reordering requirements -notice that all methods consistently keep a very high distortion limit for this language pair.
both BO-S and BO-D strategies yield better performance on test (at least +0.5BLEU improvement) than the grid and random search baselines.
contrasting
train_21417
The approach offers impressive coverage, avoids issues of distant supervision, and provides a useful exploratory tool.
openIE predictions are difficult to use in downstream tasks that expect information from a fixed schema.
contrasting
train_21418
competitions use distant supervision (Ji and Grishman, 2011).
distant supervision provides noisy training data with many false positives, and this limits the precision of the resulting extractors (see Section 2).
contrasting
train_21419
They added up to 20K instances of crowd data to 1.8M DS instances using sparse logistic regression, tuning the relative weight of crowdsourced and DS training.
they saw only a marginal improvement from F1 0.20 to 0.22 when adding crowdsourced training to DS training, and conclude that human feedback has little impact.
contrasting
train_21420
As a result, it can process various interactions between entity vectors.
it is difficult to process large-scale knowledge graphs with NTN due to its high complexity.
contrasting
train_21421
That is, when a triple (h, r, t) is given, h and t plays different roles.
the existing embeddings treat them equally and embed them into a space in the same way.
contrasting
train_21422
A taskspecific model ( Figure 1) that only learns from indomain annotations supports only (2).
a non-hierarchical joint model ( Figure 4) supports only (1): it learns a single shared w applied to any test pair regardless of task or domain.
contrasting
train_21423
As in the DA experiments, we compute average performance over twenty random train/test splits for each training set size.
figure 6 shows STS results for all models across the individual model does better than in DA: it overtakes the global model with fewer training examples and the differences with the adaptive model are smaller.
contrasting
train_21424
Is it really effective to "cram" whole sentence meanings into fixed-length vectors?
we focus on capturing fine-grained word-level information directly.
contrasting
train_21425
Our pairwise word interaction model shares similarities with recent popular neural attention models (Bahdanau et al., 2014;Rush et al., 2015).
there are important differences: For example, we do not use attention weight vectors or weighted On the Mat There Sit Cats Cats Sit On the Mat Figure 3: The similarity focus layer helps identify important pairwise word interactions (in black dots) depending on their importance for similarity measurement.
contrasting
train_21426
In most bitexts, source and target sentences have roughly the same length.
for our task of aligning text and speech where the speech is represented as a sequence of phones or PLP vectors, the source can easily be several times larger than the target.
contrasting
train_21427
In Table 4 and 5, we can see that though we add a strong hierarchical phrase-based reordering model in the baseline, our model can still bring a maximum gain of 0.59 BLEU score, which suggest that our model is applicable and robust in various circumstances.
we have noticed that the gains in Arabic-English system is relatively greater than that in Chinese-English system.
contrasting
train_21428
Most works either apply manual annotation (Yang et al., 2015) or use existing but small-scale resources such as the Penn Treebank (Chung and Gildea, 2010;Xiang et al., 2013).
we employ an unsupervised approach to automatically build a largescale training corpus for DP generation using alignment information from parallel corpora.
contrasting
train_21429
This makes the input sentences and DP-inserted TM more consistent in terms of recalling DPs.
the above method suffers from a major drawback: it only uses the 1-best prediction result for decoding, which potentially introduces translation mistakes due to the propagation of prediction errors.
contrasting
train_21430
(2012) propose both simple rule-based and manual methods to add zero pronouns in the source side for Japanese-English translation.
the BLEU scores of both systems are nearly identical, which indicates that only considering the source side and forcing the insertion of pronouns may be less principled than tackling the problem head on by integrating them into the SMT system itself.
contrasting
train_21431
In Case A, "Do you" in the translation output is compensated by adding DP 你 (you) in (b), which gives a better translation than in (a).
in case C, our DP generator regards the simple sentence as a compound sentence and insert a wrong pronoun 我 (I) in (b), which causes an incorrect translation output (worse than (a)).
contrasting
train_21432
In (b), when integrating an incorrect 1-best DP into MT, we obtain the wrong translation.
in (c), when considering more DPs (2-/4-/6-best), the SMT system generates a perfect translation by weighting the DP candidates during decoding.
contrasting
train_21433
As with most previous work on coreference resolution, we only consider mentions that are noun phrases.
not all of the noun phrases are mentions.
contrasting
train_21434
If a coreferent mention is classified as non-coreferent, the recall of the coreference resolver that uses the singleton detector will decrease.
recall errors only affect the singleton detector itself and not coreference resolvers.
contrasting
train_21435
For example, Hobbs' algorithm and agreement features are being used successfully in the Stanford system (Lee et al., 2013).
apart from features like these, a large number of linguistically motivated features have been proposed which either do not have a significant impact or are only applicable to a specific language or domain.
contrasting
train_21436
(2015) are incorporated in the Stanford system in a heuristic way: if both anaphor and antecedent are classified as singleton, and none of them is a named entity, then those mentions will be disregarded.
since our Confident model does have a high precision, we use it for removing all non-coreferent mentions in a preprocessing step.
contrasting
train_21437
In recent years, several supervised entity coreference resolution systems have been proposed, which, according to Ng (2010), can be categorized into three classes -mention-pair models (McCarthy and Lehnert, 1995), entity-mention models (Yang et al., 2008a;Haghighi and Klein, 2010;Lee et al., 2011) and ranking models (Yang et al., 2008b;Durrett and Klein, 2013;Fernandes et al., 2014) -among which ranking models recently obtained state-of-the-art performance.
the manually annotated corpora that these systems rely on are highly expensive to create, in particular when we want to build data for resource-poor languages (Ma and Xia, 2014).
contrasting
train_21438
Relation Extraction All LRFR-TUCKER models improve over BASELINE and FCM (Table 3), making these the best reported numbers for this task.
lRFR-CP does not work as well on the features with only one lexical part.
contrasting
train_21439
Summary For unigram lexical features, LRFR n -TUCKER achieves better results than LRFR n -CP.
in settings with fewer training examples, features with more lexical parts (n-grams), or when faster predictions are advantageous, LRFR n -CP does best as it has fewer parameters to estimate.
contrasting
train_21440
Since the size of English vocabulary W may be up to 10 6 scale, hierarchical softmax and negative sampling (Mikolov et al., 2013b) are applied during training to learn the model efficiently.
using CBOW to learn Chinese word embeddings directly may have some limitations.
contrasting
train_21441
This phrase in turn combines with other contextual categories using CCG combinators to form new categories representing larger phrases.
to phrase structure trees, CCG derivation trees encode a richer notion of syntactic type and constituency.
contrasting
train_21442
Automated geolocation of social media messages can benefit a variety of downstream applications.
these geolocation systems are typically evaluated without attention to how changes in time impact geolocation.
contrasting
train_21443
Notably, Monday is significantly harder, with an accuracy of 1.5 standard deviations below the mean.
the hour of the day has much more significant impact on accuracy; some times of the day are significantly easier and harder than the average.
contrasting
train_21444
Major progress has been made in this task in recent years, due primarily to the SemEval Semantic Textual Similarity (STS) task (Agirre et al., 2012;Agirre et al., 2013;Agirre et al., 2014;Agirre et al., 2015).
the utility of top STS systems has remained largely unexplored in the context of short answer grading.
contrasting
train_21445
For three of the language pairs we observed increases in BLEU scores over the baseline for all interlocking methods with substantial gains of 1.9 to 2.6 BLEU points coming from the source interlocking technique.
the German to English pair gave a negative result.
contrasting
train_21446
In the past, the prevalent criterion was to judge the quality of a translation in terms of fluency and adequacy, on an absolute scale (White et al., 1994).
different evaluators focused on different aspects of the translations, which increased the subjectivity of their judgments.
contrasting
train_21447
2015, we only consider a monolingual evaluation scenario and ignore the source text .
our features and experimental setup can be extended to include source-side features.
contrasting
train_21448
In the dependency parsing framework, some previous work incorporated MWE annotations within syntactic trees, in the form of complex subtrees either with flat structures (Nivre and Nilsson, 2004;Eryigit et al., 2011;Seddah et al., 2013) or deeper ones Candito and Constant, 2014).
these representations do not capture deep lexical analyses like nested MWEs.
contrasting
train_21449
This framework looks appealing in order to test our assumption that segmentation and parsing are mutually informative, while leaving the exact flow of information to be learned by the system itself: we do not postulate any priority between the tasks nor that all attachment decisions must be taken jointly.
we expect most decisions to be made independently except for some difficult cases that need both lexical and syntactic knowledge.
contrasting
train_21450
Word-sentiment associations are commonly captured in sentiment lexicons.
most existing manually created sentiment lexicons include only single words.
contrasting
train_21451
Previous work on Automatic Paraphrase Identification (PI) is mainly based on modeling text similarity between two sentences.
we study methods for automatically detecting whether a text fragment only appearing in a sentence of the evaluated sentence pair is important or ancillary information with respect to the paraphrase identification task.
contrasting
train_21452
This uses an alignment approach based on lexical similarity, which may fail to align some text constituents.
these mistakes only affect the precision in extracting ATFs rather than the recall.
contrasting
train_21453
The condition over d is important because the sentence aligner may fail to match some subsequences, creating false ATFs.
what is missed from one sentence will be missed also in the other sen-Train Test τ Ancillary Important Total Ancillary Important Total 1 971 687 1658 387 687 1074 2 426 364 790 166 151 317 3 166 169 335 62 79 141 4 59 73 132 21 36 57 .
contrasting
train_21454
SMT systems generate scored candidates and select a sentence having the highest score as the translation result.
the 1-best result of SMT system is not always the best result because the scoring is conducted only with local features.
contrasting
train_21455
(R. Riggs) We have analyzed stylistic patterns in quotations.
are these patterns characteristic of quotations?
contrasting
train_21456
Our work is also inspired by the recent work in introducing datasets to evaluate question answering and reading comprehension tasks that require reasoning and entailment.
to Richardson et al.
contrasting
train_21457
Desegmentation is usually applied to the 1-best output of the decoder.
this pipeline suffers from error propagation: errors made during decoding cannot be corrected, even when desegmentation results in an illegal or extremely unlikely word.
contrasting
train_21458
Lattice rescoring also involves many steps, requiring one to train and tune a complete segmented system with segmented references, then desegment lattices and compose them with a word LM, and then tune a lattice rescorer on unsegmented references.
our system is implemented as a single decoder feature function in Moses.
contrasting
train_21459
As we will see in Section 9, there is a wealth of existing methods for learning representations that capture context of words in two different languages in the literature.
they have been evaluated on tasks that do not require much semantic analysis, such as translation lexicon induction or document categorization.
contrasting
train_21460
This is an instance of sparse coding, which consists of modeling data vectors as sparse linear combinations of basis elements.
with dimensionality reduction techniques such as PCA, the learned basis vectors need not be orthogonal, which gives more flexibility to represent the data (Mairal et al., 2009).
contrasting
train_21461
Equations 3 and 4 define non-differentiable, nonconvex optimization problems and finding the globally optimally solution is not feasible.
various methods used to solve convex problems work well in practice.
contrasting
train_21462
Cosine similarity is unable to differentiate between these two cases, assigning a high score to both these pairs, causing both of them to be labeled positive.
balAPinc with sparse representations teases them apart by giving a high score to the first pair and a low score to the second.
contrasting
train_21463
The normalization constant is Z(u) = w∈V p(w | u), and V is the vocabulary.
the standard approach for estimating neural language models is maximum liklelihood estimation (MLE), where we learn the parameters θ * that maximize the likelihood of the training data, for each training instance, gradient-based approaches for MLE require a summation over all units in the output layer, one for each word in V .
contrasting
train_21464
We generate candidate properties from the vehicle and event words of a simile.
when the event is a form of "to be" or a perception verb (taste, smell, feel, sound, look), we do not generate candidate properties from the event because the verb is too general.
contrasting
train_21465
Given the nature of the complex storytelling task, the best and most reliable evaluation for assessing the quality of generated stories is human judgment.
automatic evaluation metrics are useful to quickly benchmark progress.
contrasting
train_21466
We find that using a beam size of 1 (greedy search) significantly increases the story quality, resulting in a 4.6 gain in METEOR score.
the same effect is not seen for caption generation, with the greedy caption model obtaining worse quality than the beam search model.
contrasting
train_21467
They demonstrated that asking human to translate these DTPs can bring a significant gain to the overall translation quality compared to translating other phrases.
to our Source 南亚 各国 外长 商讨 自由 贸易区 和 反 恐 问题 (south asian)(countries)(foreign minister)(discuss) (free)(trade zone)(and)(anti)(terrorism)(issue) Ref south asian foreign ministers discuss free trade zone and anti-terrorism issues Baseline south asian foreign ministers to discuss the issue of free trade area and the L2R south asian foreign ministers discuss the issue of free trade area and the PR south asian foreign ministers discuss free trade area and anti-terrorism issues Table 1: Examples of applying the Left-to-right (L2R) framework and the Pick-Revise framework (PR) in modifying a Chinese-English translation.
contrasting
train_21468
But the uses only need to perform one type of actions, which might be more suitable to be performed by a single human translator.
the improvement is relatively small compared to fully simulated results, suggesting that human involvement is still critical for improve the translation quality.
contrasting
train_21469
In this case every translator will focus on a single action, which might be easier to train and may have higher efficiency.
the performance of current framework is still related to the underlying MT system.
contrasting
train_21470
These represent two extremes of the system: consuming the maximum amount of context, which might give the most robust representation of topic semantics, and consuming the minimum amount of context, which gives the most focused representation of topics semantics (and which more generally might allow the system to directly memorize train-test pairs observed in training).
neither performs as well as the combination of all CNN features, showing that the different granularities capture complementary aspects of the entity linking task.
contrasting
train_21471
This way CNNs can efficiently learn to embed input sentences into low-dimensional vector space, preserving important syntactic and semantic aspects of the input sentence.
engineering features spanning two pieces of text such as in QA is a more complex task than classifying single sentences.
contrasting
train_21472
Although the above approaches are very valuable, they required considerable effort to study, define and implement features that could capture relational representations.
we are interested in techniques that try to automatize the feature engineering step.
contrasting
train_21473
(2015) 81 V AE+QE , again align with the state of the art.
our CTK using CH largely outperforms all previous work, e.g., 7.6 points more than CNN R in terms of MRR.
contrasting
train_21474
Our preliminary experiments using word2vec were not successful.
cNNs may provide a more effective similarity.
contrasting
train_21475
Such models can be used to effectively summarize occurrences of patterns in text and aggregate them into a vector representation.
the summary produced is not selective since all pattern occurrences are counted, weighted by how cohesive (non-consecutive) they are.
contrasting
train_21476
's (2011) convolutional model (discussed above) gives 81.47% F1 on the English test set when trained on only the gold data.
by using carefully selected word-embeddings trained on external data, they are able to increase F1 to 88.67%.
contrasting
train_21477
In order to transfer annotations, we align monolingual embeddings between languages.
a full fine-grained alignment is not possible with only ten translation pairs due to differences between the languages and variations across raw corpora from which the embeddings are derived.
contrasting
train_21478
Muralidharan and Hearst, 2013;Pettersson and Nivre, 2011).
these texts differ from contemporary training corpora in a number of linguistic respects, including the lexicon (Giusti et al., 2007), morphology (Borin and Forsberg, 2008), and syntax (Eumeridou et al., 2004).
contrasting
train_21479
(2007) report an increase of about 3% accuracy on adaptation of POS tagging from Modern English texts to Early Modern English texts if the target texts were automatically normalized by the VARD system.
normalization is not always a well-defined problem (Eisenstein, 2013), and it does not address the full range of linguistic changes over time, such as unknown words, morphological differences, and changes in the meanings of words (Kulkarni et al., 2015).
contrasting
train_21480
We did not follow their setting because it would lead to a significant change of test data.
it should be noted that these "errors" are not particularly meaningful for linguistic analysis, and could easily be addressed by heuristic post-processing.
contrasting
train_21481
For this reason, they have the potential of uncovering external relations involving language isolates and tiny language families such as Ainu, Basque, and Japanese.
our understanding of typological changes is far from satisfactory in at least two respects.
contrasting
train_21482
A central problem of the tree model is its assumption that after a branching event, two resultant languages evolve completely independently.
linguists have noted that horizontal contact is a constitutive part of evolutionary history.
contrasting
train_21483
For reasons unknown to us, they chose clustering models that basically assume tree-like evolution (Saitou and Nei, 1987;Bryant and Moulton, 2004).
creole genesis is more comparable to models that explicitly take into account genetic admixture (i.e., contact phenomena).
contrasting
train_21484
Combined with the PCA analysis in Section 4.2, this suggests that Japanese is a very non-creole-like language.
we are unsure if the possibility of creole status for (pre-)Old Japanese is completely rejected.
contrasting
train_21485
more irrelevant words are included in the window.
the negative effect on the accuracy is still relatively small, up to around −0.1 for the models using French and Russian as the second languages, and −0.25 for Czech when setting m = ∞.
contrasting
train_21486
This is unsurprising considering the historical importance of count-based models in which every surface form of a word is a separately modeled entity (English cat and Spanish gato would not likely benefit from sharing counts).
recent models that use distributed representations-in particular models that share representations across languages (Hermann and Blunsom, 2014;Faruqui and Dyer, 2014;Huang et al., 2015;Lu et al., 2015, inter alia)-suggest universal models applicable to multiple languages are a possibility.
contrasting
train_21487
Interpolation of monolingual LMs is an alternative to obtain a multilingual model (Harbeck et al., 1997;Weng et al., 1997).
interpolated models still require a trained model per language, and do not allow parameter sharing at training time.
contrasting
train_21488
One possibility is that order-critical sentences that cannot be disambiguated by a robust conceptual semantics (that could be encoded in distributed lexical representations) are in fact relatively rare.
it is also plausible that current available evaluations do not adequately reflect order-dependent aspects of meaning (see below).
contrasting
train_21489
Finally, we observe that the n-grams feature type turns out to be the most domain-dependent in our evaluation.
both the syntax and the part of speech features appear quite robust across domains.
contrasting
train_21490
Related research has already used the argumentation framework of Dung (1995) to find accepted arguments based on such a graph on a much smaller scale (Cabrio and Villata, 2012a).
the size of the web would allow for recursive analysis of the graph with statistical approaches like the famous PageRank algorithm (Page et al., 1999), enabling an assessment of argument relevance.
contrasting
train_21491
Using larger subgraphs yields higher accuracy, because they capture more structural information.
larger subgraphs can be sparse.
contrasting
train_21492
We assigned 20% of instances to the test split, totalling 378 instances (Section 6).
some of these instances cannot be obtained using predicted role labels: a missing or incorrect semantic role will unequivocally lead to positive interpretations that are not in our corpus and thus evaluation is not straightforward.
contrasting
train_21493
From the three plots of the input gates, we can observe that generally for stop words such as prepositions and articles the input gates have lower values, suggesting that the matching of these words is less important.
content words such as nouns and verbs tend to have higher values of the input gates, which also makes sense because these words are generally more important for determining the final relationship label.
contrasting
train_21494
(2006) about body words not providing additional value in their task classification work.
the GrausB(VP-TOI) in row (7) shows that using body terms more selectively has the potential for improving performance.
contrasting
train_21495
Syntax-based embeddings have been shown to have different properties in word similarity evaluations than their window based counterparts, better capturing the functional properties of words.
it is not clear if they provide any advantage for NLP tasks.
contrasting
train_21496
In genearal, we observe that the CNN performs better with Wavg, while SVM and LSTM with Conc.
the ensemble methods of the Win5 model (AvgE and ConcE) do not provide any consistent advantage over the baseline.
contrasting
train_21497
Therefore, the model actually splits the sentence locally into n-grams by sliding windows.
despite their ability to account for word orders, order-sensitive models based on neural networks still suffer from several disadvantages.
contrasting
train_21498
As the number of word embedding increases, this will increase the running time.
this tuning procedure is embarrassingly parallel.
contrasting
train_21499
Our model is similar in spirit to topic models: for an input dataset, the output of the RMN is a set of relationship descriptors (topics) and-for each relationship in the dataset-a trajectory, or a sequence of probability distributions over these descriptors (document-topic assignments).
the RMN uses recent advances in deep learning to achieve better control over descriptor coherence and trajectory smoothness (Section 4).
contrasting