Dataset schema:
  id: string (length 7–12)
  sentence1: string (length 6–1.27k)
  sentence2: string (length 6–926)
  label: string class (4 values)
train_14400
Gesmundo and Henderson (2014) also consider the rankings between partial translation pairs.
they evaluate a partial translation by extending it to a complete translation through re-decoding, and thus they need many passes of decoding for many partial translations, while ours needs only one pass of decoding for all partial translations and is thus much more efficient.
contrasting
train_14401
Of course, learning synchronous grammars from parallel data is a widely studied problem (Wu, 1997;Blunsom et al., 2008;Levenberg et al., 2012, inter alia).
there has been less exploration of learning rich non-terminal categories, largely because previous efforts to learn such categories have been coupled with efforts to learn derivation structures, a computationally formidable challenge.
contrasting
train_14402
It is these rules that allow the right translation to be preferred since the MLE chooses not to place the object of the sentence in the subject's span.
the spectral parameters seem to discriminate between these higherlevel rules better than EM, which scores spans starting with the first word uniformly highly.
contrasting
train_14403
(2008) present a Bayesian model for synchronous grammar induction, and place an appropriate nonparametric prior on the parameters.
their starting point is to estimate a synchronous grammar with multiple categories from parallel data (using the word alignments as a prior), while we aim to refine a fixed grammar with additional latent states.
contrasting
train_14404
For example, the gender of the employee Kay Mann was marked as unknown in their gender assignment.
in our work, we manually research and determine the gender of every core employee.
contrasting
train_14405
This is because the other features in fact make quite different predictions depending on gender and/or gender environment.
the content features (and in particular the lexical features) are so powerful on their own that the relative contribution of the gender-based features decreases again.
contrasting
train_14406
Twitter-LDA, which assumes that a single tweet consists of a single topic, has been proposed and shown to be superior in topic semantic coherence.
Twitter-LDA is not capable of online inference.
contrasting
train_14407
(2011) show that it works well in terms of topic semantic coherence compared with LDA.
as with the case of LDA, Twitter-LDA cannot consider a sequence of tweets because it assumes that samples are exchangeable.
contrasting
train_14408
In Twitter-LDA, π is common for all users, meaning that the rate between background and topic words is the same for each user.
this assumption could be incorrect, and the rate could differ for each user.
contrasting
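The switch these two sentences describe can be sketched as follows; this is a minimal illustration assuming a Bernoulli background/topic switch with a hypothetical per-user rate pi_user, not the quoted model's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_word(pi_user, topic_dist, background_dist):
    # Twitter-LDA-style switch: with probability pi_user emit a topic
    # word, otherwise a background word. Vanilla Twitter-LDA shares one
    # pi across all users; making it user-specific is the variation
    # this record argues for.
    if rng.random() < pi_user:
        return "topic", int(rng.choice(len(topic_dist), p=topic_dist))
    return "background", int(rng.choice(len(background_dist), p=background_dist))

print(generate_word(0.8, [0.5, 0.3, 0.2], [0.7, 0.3]))
```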
train_14409
Their model learns the transition parameters among topics by minimizing the prediction error on topic distribution in subsequent tweets.
the TM-LDA does not consider dynamic word distributions.
contrasting
train_14410
An online variational Bayes algorithm for LDA is also proposed (Hoffman et al., 2010).
these methods are based on LDA and do not consider the shortness of a tweet.
contrasting
train_14411
Similarly to Feldman and Peng (2013), our starting point is that idioms are semantic outliers that violate cohesive structure, especially in local contexts.
our task is framed as supervised classification and we rely on data annotated for idiomatic and literal expressions.
contrasting
train_14412
If a target query appears in a similar semantic context, the topics will be able to describe this query as well.
one might similarly apply LDA to a given query to extract query topics, and create the query vector from the query topics.
contrasting
train_14413
The time complexity of PTK is O(pρ²), where p is the largest subsequence of children that one wants to consider and ρ is the maximal out-degree observed in the two trees.
the average running time again tends to be linear for syntactic trees (Moschitti, 2006).
contrasting
train_14414
There are cleverer ways to reduce the complexity (e.g., see (Huang and Chiang, 2005) for three such ways).
since the efficiency of the algorithm did not limit us to produce k-best parses for larger k, it was not a priority in this work.
contrasting
train_14415
Since in our problem a pair of hypotheses (h_i, h_j) constitutes a data instance, we now need to define the kernel between the pairs.
notice that DISCTK only works on a single pair.
contrasting
train_14416
Note that more important relational features would be the subtree patterns extracted from the DT.
they are already generated by TKs in a simpler way.
contrasting
train_14417
These approaches achieve state-of-the-art performance mainly relying on morphosyntactic and lexical factors.
consider the following example.
contrasting
train_14418
Hence, the system is able to correctly identify some mentions even in the presence of parsing or preprocessing errors.
as a result, IMSCoref has to process many spurious mentions, which makes learning more difficult.
contrasting
train_14419
They report preliminary results using the CoNLL scorer.
we think the coreference resolution system and the evaluation metric for coreference resolution are not suitable for bridging resolution since bridging is not a set problem.
contrasting
train_14420
Therefore it is difficult for the learning-based approach to learn effective rules to predict bridging links.
all learning-based systems tend to have higher recall but lower precision compared to the rule-based system.
contrasting
train_14421
The models are considerably better at resolving explicit referrals (both non-spatial and spatial) compared to implicit ones.
for locational referrals, the difference between the accuracy of implicit and explicit REs is significant (75.2% vs. 56.6% in media and 86.2% vs. 56.7% in places).
contrasting
train_14422
For example, the Rhetorical Structure Theory (RST) (Mann and Thompson, 1988) represents a discourse as a tree with phrases or clauses as elementary discourse units (EDUs).
RST ignores the importance of connectives to a great extent.
contrasting
train_14423
Example 1 shows an explicit reason relation signaled by the discourse connective "particularly if" and an implicit result relation represented by the inserted discourse connective "so", with Arg1 in italics and Arg2 in bold.
as a connective and its arguments are determined in a local contextual window, it is normally difficult to deduce a complete discourse structure from such a connective-argument scheme.
contrasting
train_14424
First, the pair-wise classifier approaches learn a classifier on mention pairs (edges) (Soon et al., 2001;Ng and Cardie, 2002;Bengtson and Roth, 2008), and perform some form of approximate decoding or post-processing using the pair-wise scores to make predictions.
the pair-wise classifier approach suffers from several drawbacks including class imbalance (fewer positive edges compared to negative edges) and not being able to leverage the global structure (instead making independent local decisions).
contrasting
train_14425
Indeed, several of the approaches that have achieved state-of-the-art results on OntoNotes fall under this category (Björkelund and Kuhn, 2014).
their efficiency requirement leads to a highly nonrealizable learning problem.
contrasting
train_14426
Specifically, when learning F_prune, in the worst case there can be ambiguity about which of the non-optimal actions to retain, and for only some of those an effective F_score can be found.
we observe a loss decomposition in terms of the individual losses due to F_prune and F_score, and develop a stage-wise learning approach that first learns F_prune and then learns a corresponding F_score.
contrasting
train_14427
We essentially employ the same features as in the Easyfirst system.
we provide some high-level details that are necessary for subsequent discussion.
contrasting
train_14428
The simplest method is to use a bag-of-words representation derived from the text description.
this scheme disregards the ordering of words and the finer nuances of meaning that evolve from composing words into sentences and paragraphs.
contrasting
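A minimal sketch of the bag-of-words representation discussed above, showing exactly the word-order loss the second sentence points out (whitespace tokenization is a simplifying assumption):

```python
from collections import Counter

def bag_of_words(text):
    # Word order is discarded: "dog bites man" and "man bites dog"
    # receive identical representations.
    return Counter(text.lower().split())

assert bag_of_words("dog bites man") == bag_of_words("man bites dog")
```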
train_14429
One solution to this problem is to approximate Q(s, a) using a parametrized function Q(s, a; θ), which can generalize over states and actions by considering higher-level attributes (Sutton and Barto, 1998;Branavan et al., 2011a).
creating a good parametrization requires knowledge of the state and action spaces.
contrasting
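A minimal sketch of the parametrized Q(s, a; θ) mentioned above, here as a linear model over a hand-designed feature map; the features and update rule are generic illustrations (assumptions), not the cited papers' parametrization.

```python
import numpy as np

def featurize(state, action):
    # Hypothetical feature map phi(s, a) over higher-level attributes.
    return np.array([state["score"], state["num_steps"], float(action == "go")])

def q_value(theta, state, action):
    # Linear function approximation: Q(s, a; theta) = theta . phi(s, a)
    return theta @ featurize(state, action)

def q_learning_update(theta, s, a, r, s_next, actions, alpha=0.01, gamma=0.99):
    # One temporal-difference step; the update generalizes across states
    # and actions because theta is shared by all (s, a) pairs.
    target = r + gamma * max(q_value(theta, s_next, b) for b in actions)
    td_error = target - q_value(theta, s, a)
    return theta + alpha * td_error * featurize(s, a)
```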
train_14430
The simplest method to create these minibatches from the experience memory D is to sample uniformly at random.
certain experiences are more valuable than others for the agent to learn from.
contrasting
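Uniform versus prioritized sampling of minibatches from an experience memory, as contrasted above; the proportional priority scheme is a generic sketch (an assumption), not the quoted paper's exact method.

```python
import random
import numpy as np

def sample_uniform(memory, batch_size):
    # Simplest scheme: every stored experience is equally likely.
    return random.sample(memory, batch_size)

def sample_prioritized(memory, priorities, batch_size):
    # Experiences with larger (e.g., TD-error-based) priority are drawn
    # more often, reflecting that some experiences are more valuable.
    probs = np.asarray(priorities, dtype=float)
    probs /= probs.sum()
    idx = np.random.choice(len(memory), size=batch_size, p=probs)
    return [memory[i] for i in idx]
```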
train_14431
The game has an additional quest of reaching a secret tomb.
this is a complex quest that requires the player to memorize game events and perform high-level planning, which are beyond the scope of the current work.
contrasting
train_14432
Interestingly, DIST2REF does also show some cultural effects in its geolocation errors: for example, some Pacific island states with lesser-known identities (e.g., Nauru and French Polynesia) are placed in the Indian Ocean, where we find perhaps the prototypes of beautiful islands, like the Seychelles and Mauritius; also, Central American countries (such as Panama, El Salvador, and Nicaragua) move towards their "cultural center of gravity", South America.
this kind of cultural bias is much more prominent in the original WORD2VEC distributional representation.
contrasting
train_14433
The broader goal of getting at referential information with distributional semantics is shared with Herbelot (2015).
the specific approach is different, as she constructs vectors for individual entities (literary characters) by contextualizing generic noun vectors with distributional properties of those entities.
contrasting
train_14434
The corresponding grammars for recognizing and producing graphs are more flexible and powerful than tree grammars.
because of their high complexity, graph grammars have not been widely used in NLP.
contrasting
train_14435
Recently, along with progress on graph-based meaning representation, hyperedge replacement grammars (HRG) (Drewes et al., 1997) have been revisited, explored and used for semantic-based machine translation (Jones et al., 2012).
the translation process is rather complex and the resources it relies on, namely abstract meaning corpora, are limited as well.
contrasting
train_14436
ITGs recognize only the binarizable permutations, which is a major restriction when used on the data: there are many nonbinarizable permutations in actual data (Wellington et al., 2006).
our PETs are obtained by factorizing permutations obtained from the data, i.e., they exactly fit the range of prime permutations in the parallel corpus.
contrasting
train_14437
Therefore, we accept the rewritten sentence.
when the subject phrase is long and the object phrase is short, a swap may not reduce delay.
contrasting
train_14438
This work is also related to preprocessing reordering approaches (Xu et al., 2009;Collins et al., 2005;Galley and Manning, 2008;Hoshino et al., 2013;Hoshino et al., 2014) in batch MT for language pairs with substantially different word orders.
our problem is different in several ways.
contrasting
train_14439
@RachOrange (California): This tweet is a positive example from the USA toward Pakistan.
a typical sentiment classifier misclassifies this as negative because "miss" and "sad" express sadness.
contrasting
train_14440
Much of the existing work focuses on annotating a single Twitter message (tweet).
information in Twitter is rarely digested in isolation, but rather in a collective manner, with the adoption of special mechanisms such as hashtags.
contrasting
train_14441
While the content-based methods (Meij et al., 2012;Guo et al., 2013;Fang and Chang, 2014) consider tweets independently, the graph-based methods (Cassidy et al., 2012;Liu et al., 2013) use all related tweets (e.g., posted by a user) together.
most of them focus on entity mentions in tweets.
contrasting
train_14442
The external information derived from the tweets is largely ignored.
we exploit both context information from the microblog and Wikipedia resources.
contrasting
train_14443
In that work, the author proposes graph-based models and achieves a fair amount of improvement.
to the best of our knowledge, no previous work of this task tries to focus on summarization beyond pure sentence extraction.
contrasting
train_14444
However, to the best of our knowledge, no previous work of this task tries to focus on summarization beyond pure sentence extraction.
cross-language summarization can be seen as a special kind of machine translation: translating the original documents into a brief summary in a different language.
contrasting
train_14445
Since the inception of BLEU, evaluation of automatic metrics in MT has been by correlation with human assessment.
in summarization, over the years since the introduction of ROUGE, a variety of different methodologies have been applied to the evaluation of its metrics.
contrasting
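The MT-style evaluation the first sentence describes reduces to correlating metric scores with human assessment; a minimal Pearson correlation sketch with toy numbers (the scores are invented for illustration):

```python
import numpy as np

def pearson(metric_scores, human_scores):
    # Correlate an automatic metric's system-level scores with
    # human assessment scores.
    x = np.asarray(metric_scores, dtype=float)
    y = np.asarray(human_scores, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm @ ym) / np.sqrt((xm @ xm) * (ym @ ym))

print(pearson([0.31, 0.27, 0.40], [72.1, 69.5, 80.3]))  # toy scores
```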
train_14446
As described in Section 2, however, linguistic quality is commonly omitted when evaluating metrics, with metrics assessed only by the degree to which they correlate with human coverage scores.
we include all available human assessment data for evaluating metrics.
contrasting
train_14447
As mentioned earlier, there have been numerous studies that used data from the public Twitter feeds.
none of the datasets in those studies focused on tweets and related articles linked to these tweets.
contrasting
train_14448
We found that it did not match the titles.
even though there are no exact matches, there might still be matches where the tweet is a slight modification of the article's headline, which can be measured using a partial-match measure.
contrasting
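One plausible instantiation of the partial-match measure mentioned above is a token-overlap ratio; the 0.6 threshold and whitespace tokenization are assumptions for illustration.

```python
def partial_match(tweet, headline, threshold=0.6):
    # Flags tweets that are slight modifications of an article headline
    # via the overlap ratio of their token sets.
    a, b = set(tweet.lower().split()), set(headline.lower().split())
    if not a or not b:
        return False
    return len(a & b) / min(len(a), len(b)) >= threshold
```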
train_14449
A further approach removed the requirement of seed lexicons, and induced lexicons using bilingual spaces spanned by multilingual probabilistic topic models (Vulić et al., 2011;Liu et al., 2013;Vulić and Moens, 2013b).
these models require document alignments as initial bilingual signals.
contrasting
train_14450
The MEN dataset that Kiela and Bottou (2014) evaluate on explicitly measures word relatedness.
the current lexicon learning task seems to require something other than relatedness: whilst a chair and a table are semantically related, a translation for chair is not a good translation for table.
contrasting
train_14451
This could explain why we did not see increased performance on the bilingual lexicon induction task with additional layers.
the increase in performance on the relatedness task is relatively minor, and further investigation is required into the utility of the additional layers for relatedness tasks.
contrasting
train_14452
The ever-increasing user-generated content has always been a motivation for sentiment analysis research, but the majority of the work has been done for the English language.
in recent years, an increasing amount of text in Hindi has emerged on electronic sources, but the set of NLP frameworks to process this data is sadly minuscule.
contrasting
train_14453
Joint models of relevance and subjectivity have a great benefit in that they allow a large degree of freedom in controlling redundancy.
the conventional two-stage approach of Pang and Lee (2004), which first generates candidate subjective sentences using min-cut and then selects the top subjective sentences within a budget to generate a summary, has lower computational complexity than joint models.
contrasting
train_14454
In contrast, the conventional two-stage approach of Pang and Lee (2004), which first generates candidate subjective sentences using min-cut and then selects the top subjective sentences within a budget to generate a summary, has lower computational complexity than joint models.
two-stage approaches are suboptimal for text summarization.
contrasting
train_14455
For example, when we select subjective sentences first, the sentiment as well information content may become redundant for a particular aspect.
when we extract sentences first, an important subjective sentence may fail to be selected, simply because it is long.
contrasting
train_14456
Much fine-grained analysis is span or aspect based (Yang and Cardie, 2014;Pontiki et al., 2014).
this work contributes to entity/event-level sentiment analysis.
contrasting
train_14457
While the targets in aspect-based sentiment analysis are often entity targets, they are mainly product aspects, which are a predefined set.
the target in the entity/event-level task may be any noun or verb.
contrasting
train_14458
Previously, we also propose a set of sentiment inference rules and develop a rule-based system to infer sentiments .
the rule-based system requires all information regarding explicit sentiments and +/-effect events to be provided as oracle information by manual annotations.
contrasting
train_14459
Since the exact boundaries of the spans are hard to define even for human annotators (Wiebe et al., 2005a;Yang and Cardie, 2013), the target span in MPQA 2.0 could be a single word, an NP or VP, or a text span covering more than one constituent.
in MPQA 3.0, each target is anchored to the head of an NP or VP, which is a single word.
contrasting
train_14460
This shows that the results from span-based sentiment analysis systems do not provide sufficiently accurate information for the more fine-grained entity/event-level sentiment analysis task.
PSL1 achieves much higher accuracy than the baselines.
contrasting
train_14461
The benefit of the word2vec text feature is clear when moving from high-level categories to original terms from descriptions, where it consistently improves the mean rank (up to 25%).
the indicator vectors resulted in a less significant improvement, if not worse performance, when using the sparse original terms.
contrasting
train_14462
For concave optimization problems like IBM Model 1, we have guarantees on the convergence of optimization algorithms such as Expectation Maximization (EM).
as was pointed out recently, the objective of IBM Model 1 is not strictly concave and there is quite a bit of alignment quality variance within the optimal solution set.
contrasting
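For reference, a compact EM loop for IBM Model 1 on a toy corpus (no NULL word, no smoothing; variable names are illustrative). Because the objective is concave but not strictly concave, different initializations of t can converge to solutions of equal likelihood but different alignment quality, which is the variance the record describes.

```python
from collections import defaultdict

def ibm1_em(corpus, iterations=10):
    """corpus: list of (source_tokens, target_tokens) sentence pairs.
    Returns t(f|e), the lexical translation probabilities."""
    e_vocab = {e for es, _ in corpus for e in es}
    f_vocab = {f for _, fs in corpus for f in fs}
    t = {(f, e): 1.0 / len(f_vocab) for f in f_vocab for e in e_vocab}
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for es, fs in corpus:            # E-step: expected alignment counts
            for f in fs:
                z = sum(t[(f, e)] for e in es)
                for e in es:
                    c = t[(f, e)] / z
                    count[(f, e)] += c
                    total[e] += c
        for (f, e) in t:                 # M-step: renormalize
            if total[e] > 0:
                t[(f, e)] = count[(f, e)] / total[e]
    return t

t = ibm1_em([("the house".split(), "la maison".split()),
             ("the book".split(), "le livre".split())])
```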
train_14463
Summarizing (Moore, 2004), we note that this work improves substantially upon the classical IBM Model 1 by introducing a set of heuristics, among which are to (1) modify the lexical parameter dictionaries, (2) introduce an initialization heuristic, (3) modify the standard IBM 1 EM algorithm by introducing smoothing, and (4) tune additional parameters.
we stress that the main concern of this work is not just heuristic-based empirical improvement, but also structured learning.
contrasting
train_14464
They use these vectors as input to a logistic regression classifier and achieve state-of-the-art performance on sentiment classification of movie reviews.
they did not consider the effect of this model modification directly on the task of language modelling.
contrasting
train_14465
Continuously retraining the model and adjusting parameters can be very time-consuming compared to a simple feedforward process through the network.
extra computation is also needed when using a hidden vector of size M , as opposed to using a smaller value.
contrasting
train_14466
Usually, unsupervised knowledge sources are used to form semantic codes of the labels, which helps us generalize to unseen labels.
there are also different ways to express the same meaning, and similarly, most of them cannot be included in the training set.
contrasting
train_14467
(2014), the authors presented a simple technique for speeding up feed-forward embedding-based neural network models, where the dot products between each word embedding and part of the first hidden layer are pre-computed offline.
this technique cannot be used for hidden layers beyond the first.
contrasting
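A sketch of the pre-computation trick described above for the first hidden layer of a feed-forward embedding model; the sizes are toy values, and the per-position slicing of W illustrates the general idea rather than any specific implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
V, D, H, context = 1000, 32, 64, 3           # assumed toy sizes
E = rng.normal(size=(V, D))                   # word embeddings
W = rng.normal(size=(context * D, H))         # first hidden layer weights

# Offline: precompute each word's contribution to the hidden layer,
# one slice of W per context position.
precomp = np.stack([E @ W[p * D:(p + 1) * D] for p in range(context)])

def first_hidden(word_ids):
    # Online: the matrix-vector products reduce to table lookups + adds.
    return sum(precomp[p, w] for p, w in enumerate(word_ids))

# Equivalent to the direct computation:
ids = [5, 42, 7]
direct = np.concatenate([E[w] for w in ids]) @ W
assert np.allclose(first_hidden(ids), direct)
```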
train_14468
Perhaps most surprisingly, the additive function performs as well as the max function, despite the fact that it provides no additional modeling power compared to a 1-layer network.
it does allow the model to generalize better than a 1-layer network by explicitly tying together two or three hidden nodes from each node in the output layer.
contrasting
train_14469
Fortunately, most structured margin objectives are convex, so a range of optimization methods with similar theoretical properties are available -in short, any of these methods will work in the end.
in practice, how fast each method converges varies across tasks.
contrasting
train_14470
In such cases, most learning methods work fairly well.
when models use real-valued features, learning may involve determining a more delicate balance between features.
contrasting
train_14471
The dual methods were particularly sensitive to these hyperparameters, performing poorly if they were not chosen carefully.
performance for the primal methods remained high over a broad range of values.
contrasting
train_14472
In order to reduce noise in training data, most natural language crowdsourcing annotation tasks gather redundant labels and aggregate them into an integrated label, which is provided to the classifier.
aggregation discards potentially useful information from linguistically ambiguous instances.
contrasting
train_14473
Percentage Agreement: In this paper, we follow Beigman Klebanov and Beigman (2014) in using the nominal agreement categories Hard Cases and Easy Cases to separate instances by item agreement.
unlike Beigman Klebanov and Beigman (2014), who use simple percentage agreement, we calculate item-specific agreement via Krippendorff's (1970) α item agreement, with Nominal, Ordinal, or Ratio distance metrics as appropriate.
contrasting
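For contrast with the item-specific α described above, simple per-item percentage agreement is just the fraction of agreeing annotator pairs; Krippendorff's α additionally corrects for chance and supports Nominal, Ordinal, or Ratio distance metrics. A minimal sketch:

```python
from collections import Counter

def item_percentage_agreement(labels):
    # Fraction of annotator pairs that agree on a single item.
    n = len(labels)
    if n < 2:
        return 1.0
    pairs = n * (n - 1) / 2
    agreeing = sum(c * (c - 1) / 2 for c in Counter(labels).values())
    return agreeing / pairs

print(item_percentage_agreement(["POS", "POS", "NEG"]))  # 1/3
```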
train_14474
did not significantly outperform Integrated.
HighAgree does outperform Integrated on 4 of the 5 tasks, especially for Hard Cases: Hard Case improvements for Biased Language, POS Tagging, and Affective Text, and overall improvements for RTE, POS Tagging, and Affective Text were significant (paired t-test, p < 0.05, for numerical output, or McNemar's test (McNemar, 1947), p < 0.05, for nominal classes).
contrasting
train_14475
As a consequence, we need our gold ranking to define an order on all the word pairs.
this also means that we somehow need to order completely unrelated word pairs; for example, we have to decide whether (dog, cat) is more similar than (banana, apple).
contrasting
train_14476
All embeddings perform better than guessing, indicating that there is at least some coherent structure captured in all of them.
the best performing embeddings at this task are TSCCA, CBOW and GloVe (the precision mean differences were not significant under a random permutation test), while TSCCA attains greater precision (p < 0.05) in relation to C&W, H-PCA and random projection embeddings.
contrasting
train_14477
These results are in contrast to the direct comparison study, where the performance of TSCCA was found to be significantly worse than that of CBOW.
the order of the last three embeddings remains unchanged, implying that performance on the intrusion task and performance on the direct comparison task are correlated.
contrasting
train_14478
Extrinsic evaluations use embeddings as features in models for other tasks, such as semantic role labeling or part-of-speech tagging (Collobert et al., 2011), and improve the performance of existing systems (Turian et al., 2010).
they have been less successful at other tasks such as parsing (Andreas and Klein, 2014).
contrasting
train_14479
LDA is an unsupervised model-it requires no annotation-and discovers, without any supervision, the thematic trends in a text collection.
LDA's lack of supervision can lead to disappointing results.
contrasting
train_14480
With SparseLDA, inferring LDA models over large topic spaces becomes tractable.
existing methods for incorporating prior knowledge use conventional Gibbs sampling, which hinders inference.
contrasting
train_14481
Therefore, the posterior topic assignments v and w will be correlated.
if v and w are uncorrelated, nothing (other than the Dirichlet's rich-get-richer effect) prevents the topics from diverging.
contrasting
train_14482
their topic probabilities are correlated.
a cannot-link relation between two words indicates that these two words are not topically similar, and they should not both be prominent within the same topic.
contrasting
train_14483
Path queries on a knowledge graph can be used to answer compositional questions such as "What languages are spoken by people living in Lisbon?".
knowledge graphs often have missing facts (edges) which disrupts path queries.
contrasting
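A minimal sketch of the compositional path query in the example above; the toy graph, entities, and relation names are assumptions for illustration. A missing fact (edge) silently drops answers, which is the disruption the second sentence describes.

```python
def path_query(graph, start_entities, relations):
    # graph maps (entity, relation) -> set of target entities.
    # Follow the relation path from the start set, breadth-wise.
    frontier = set(start_entities)
    for rel in relations:
        frontier = {o for s in frontier for o in graph.get((s, rel), set())}
    return frontier

# Toy facts for "What languages are spoken by people living in Lisbon?"
graph = {
    ("Lisbon", "lives_in_inverse"): {"Ana", "Joao"},
    ("Ana", "speaks"): {"Portuguese"},
    ("Joao", "speaks"): {"Portuguese", "English"},
}
print(path_query(graph, {"Lisbon"}, ["lives_in_inverse", "speaks"]))
```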
train_14484
For the small dataset, it is clear that the data is insufficient for the model to learn a good tagset mapping, especially for a morphologically rich language like Czech.
with more data, the model is better able to learn the tagset mapping as part of joint training.
contrasting
train_14485
Compared with N-gram models, syntactic models capture rich structural information, and can be more effective in improving the fluency of large constituents, long-range dependencies and overall sentential grammaticality.
syntactic models require annotated syntactic structures for training, which are expensive to obtain manually.
contrasting
train_14486
The syntactic model requires that the training sentences have syntactic dependency structure.
only the WSJ data contains gold-standard annotations.
contrasting
train_14487
It conforms to the intuition that syntactic quality affects the fluency of surface texts.
the influence is not huge: the BLEU scores decrease by 1.0 point as the parsing accuracy decreases from 88.10% to 57.31%. The influence of the parsing accuracy of the training data on cross-domain word ordering is measured by using the same training settings, but testing on the WPB and SANCL test sets.
contrasting
train_14488
This suggests that it is possible to use large automatically-parsed data to train syntactic models.
when the training data scale increases, syntactic models can become much slower to train compared with N-gram models.
contrasting
train_14489
This again shows the effect of syntactic quality of the training data.
as the scale of automatically-parsed AFP data increases, the performance of the syntactic model rapidly increases, surpassing the syntactic model trained on the high-quality WSJ data.
contrasting
train_14490
This can be explained by the local scoring nature of the N-gram model.
the syntactic model makes less long-range distortions, which can suggest better sentence structure.
contrasting
train_14491
The syntactic model performs better in most constituent labels.
the N-gram model performs better in WHPP, SBARQ and WHNP.
contrasting
train_14492
It would be overly expensive to obtain a human oracle on discusses.
according to Papineni (2002), a BLEU … (footnote 7: For the combined model, we used the WSJ training data for training, because the syntactic model is slower to train using large data.)
contrasting
train_14493
Generally the models are good at picking out key words from the input, such as names and places.
both models will reorder words in syntactically incorrect ways, for instance in Sentence 7 both models have the wrong subject.
contrasting
train_14494
Since DP is the extension of finite mixture models to the nonparametric setting, the appropriate tool for nonparametric topic models is HDP.
both LDA and HDP are normally suitable for long documents.
contrasting
train_14495
As expected, for pairs 1 → 2, 3 → 4, and 5 → 6, all the scores are below their corresponding upper bounds from the in-domain setting in Table 3.
for pair 7 → 8, the QWK score for domain adaptation with 100 target essays outperforms that of the in-domain, albeit only by 0.4%.
contrasting
train_14496
For example, common instances of TE are rephrases or summarizations of a sentence; however, they cannot serve to support a claim within a discussion, as they merely repeat it (Table 1, S6).
an anecdotal story may have strong emotional impact that will effectively support a claim during a discussion, although the truth of the claim cannot be inferred from such evidence.
contrasting
train_14497
In addition, some works based on machine-learning techniques used the same topic in training and testing (Rosenfeld and Kraus, 2015; Boltužić and Šnajder, 2014), relying on features from the topic itself in identifying arguments.
here, we focus on detecting an essential constituent of an argument, the evidence, rather than detecting whole arguments or other argument parts like claims (Lippi and Torroni, 2015).
contrasting
train_14498
Since, to the best of our knowledge, this is the first work to address CDED, there is no prior art to compare our results to.
to ensure that this task is indeed empirically different from related tasks, and demands a specialized pipeline to handle, we compare with two baselines that are often used in related tasks.
contrasting
train_14499
The classification of query chains is performed by support vector machines, and its training data is generated in a supervised fashion by manual inspection and annotation.
we do not manually annotate any of our training data.
contrasting