id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
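For reference, a minimal Python sketch for loading and inspecting records with this schema is shown below. It assumes the rows are stored as JSON Lines with exactly these four fields; the file name train.jsonl and the helper load_pairs are illustrative placeholders, not part of the dataset.

import json
from collections import Counter

# Minimal sketch, assuming the rows shown below are stored as JSON Lines,
# one object per line with the fields "id", "sentence1", "sentence2", "label".
# The path "train.jsonl" is a placeholder, not part of the dataset.
def load_pairs(path="train.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            yield row["id"], row["sentence1"], row["sentence2"], row["label"]

if __name__ == "__main__":
    label_counts = Counter(label for _, _, _, label in load_pairs())
    print(label_counts)  # in the preview rows below, every label is "contrasting"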
train_3200
Yokogawa (2002) describes a system for detecting the presence of puns in Japanese text.
this work is concerned only with puns which are both imperfect and ungrammatical, relying on syntactic cues rather than the lexical-semantic information we propose to use.
contrasting
train_3201
The Lesk model had an accuracy of 56%, which is lower than that of a naïve polysemy model which simply selects the punchline with the highest mean polysemy (66%) and even of a random-choice baseline (62%).
it should be stressed here that the Lesk model did not directly account for the possibility that any given word might be ambiguous.
contrasting
train_3202
Following prior work on domain adaptation (Blitzer et al., 2006), high-frequency features (unigrams/bigrams) common to both domains are referred to as domain-independent features or pivots.
we use non-pivots to refer to features that are specific to a single domain.
contrasting
train_3203
For example, in sentiment classification, words such as excellent or terrible would express similar sentiment about a product irrespective of the domain.
if a pivot expresses different semantics in the source and the target domains, then it will be surrounded by dissimilar sets of non-pivots, and this will be reflected in the first criterion.
contrasting
train_3204
Despite their impressive performance, existing methods for word representation learning do not consider the semantic variation of words across different domains.
as described in Section 1, the meaning of a word varies from one domain to another, and must be considered.
contrasting
train_3205
Finally, the learnt projection matrix is used to find the nearest neighbors in the source domain for each target domain-specific feature.
unlike our proposed method, their method does not learn domain-specific word representations, but simply uses co-occurrence counting when creating in-domain word representations.
contrasting
train_3206
This work demonstrates the importance of considering the domain specificity of word senses.
the focus of their work is not to learn representations for words or their senses in a domain, but to construct glossaries.
contrasting
train_3207
In particular, the number of senses per word type is automatically estimated.
their method is limited to a single domain, and does not consider how the representations vary across domains.
contrasting
train_3208
1) is inspired by the prior work on word representation learning for a single domain (Collobert et al., 2011).
unlike the multilayer neural network in Collobert et al.
contrasting
train_3209
Spectral Feature Alignment (SFA) (Pan et al., 2010) and Structural Correspondence Learning (SCL) (Blitzer et al., 2007) are the state-of-the-art methods for cross-domain sentiment classification.
those methods do not learn word representations.
contrasting
train_3210
The prevailing methods for the computation of a vector space representation are based on distributional semantics (Harris, 1954).
these approaches, whether in their conventional co-occurrence-based form (Salton et al., 1975; Turney and Pantel, 2010; Landauer and Dooley, 2002), or in their newer predictive branch (Collobert and Weston, 2008; Mikolov et al., 2013; Baroni et al., 2014), suffer from a major drawback: they are unable to model individual word senses or concepts, as they conflate different meanings of a word into a single vectorial representation.
contrasting
train_3211
There have been several efforts to adapt and apply distributional approaches to the representation of word senses (Pantel and Lin, 2002; Brody and Lapata, 2009; Reisinger and Mooney, 2010; Huang et al., 2012).
none of these techniques provides representations that are already linked to a standard sense inventory, and consequently such mapping has to be carried out either manually, or with the help of sense-annotated data.
contrasting
train_3212
Babelfy (Moro et al., 2014) is an approach with state-of-the-art performance that relies on random walks on the BabelNet multilingual semantic network (Navigli and Ponzetto, 2012a) and densest subgraph heuristics.
the approach is limited to the WSD and Entity Linking tasks.
contrasting
train_3213
The resulting performance drops have often been addressed via various domain adaptation approaches (Blitzer et al., 2006; Daume III and Marcu, 2006; Reichart and Rappoport, 2007; Chen et al., 2009; Daumé et al., 2010; Chen et al., 2011; Plank and Moschitti, 2013; Hovy et al., 2015b, inter alia).
the authors and target demographics of social media differ radically from those in newswire text, and domain might in some cases be a secondary effect to demographics.
contrasting
train_3214
In a standard "improvement over baseline"-setup, this would be problematic.
the results should not be interpreted with respect to their absolute value on the respective tasks, but with respect to the relative differences.
contrasting
train_3215
With the exception of Hotels and Fashion Accessories, the two distributions are almost bimodal opposites.
they are still significantly correlated (Spearman ρ is 0.49 at p < 0.01).
contrasting
train_3216
For many user-generated content settings, this is realistic, since demographic information is available.
we only predict the target variable (sentiment, topic, or author attribute).
contrasting
train_3217
We test for significance with the standard cutoff of p < 0.05.
even under a bootstrap-sampling test, we can only limit the number of likely false positives.
contrasting
train_3218
That leaves us with 790,061 data points for further analysis.
in our semantic model, function words are not affected by the ∆ semantic similarity adjustment and are therefore not analyzable for the effect of semantically-weighted trigram predictability.
contrasting
train_3219
Latent variable topic models, such as Latent Dirichlet Allocation (Blei et al., 2003), are popular approaches for automatically discovering topics in document collections.
learning models that capture the large numbers of distinct topics expressed in today's corpora is challenging.
contrasting
train_3220
A hierarchical model can make fine-grained distinctions where data is plentiful, and back-off to more coarse-grained distinctions where data is sparse.
current hierarchical models are hindered by computational complexity.
contrasting
train_3221
The example includes simplifications we also utilize in our experiments, namely that all nodes at a given depth in the tree have the same number of children and the same δ value.
the inference techniques we present in Section 4 are applicable to any tree T and set of coefficients {δ_a}.
contrasting
train_3222
Both LDA with collapsed sampling and SBTDM share an advantage in space complexity: the model parameters are specified by a sparse set of non-zero counts denoting how often tokens of each word or document are assigned to each topic.
in general the sampling distribution for SBTDM has non-uniform probabilities for each of L different latent variable values.
contrasting
train_3223
However, in general the sampling distribution for SBTDM has non-uniform probabilities for each of L different latent variable values.
thus, even if many parameters are zero, a naive approach that computes the complete sampling distribution will still take time linear in L. In SBTs, the sampling distribution can be constructed efficiently using a simple recursive algorithm that exploits the structure of the tree.
contrasting
train_3224
The average test log-likelihoods per sentence for these two WSME models are −494 and −509 respectively.
the WERs from using the trained WSME models in hypothesis re-ranking are not as poor as would be expected from their PPLs.
contrasting
train_3225
It can be seen that Gaussian LDA is a clear winner, achieving a score that is on average 275% higher.
we are using embeddings trained on the Wikipedia corpus itself, and the PMI measure is computed from co-occurrence in the Wikipedia corpus.
contrasting
train_3226
We also noticed that there were certain words ('don', 'writes', etc) which often came as a top word in many topics in classic LDA.
our model was not able to capture the 'space' topics which LDA was able to identify.
contrasting
train_3227
We used lexical, POS, syntactic and discourse-based information in the form of treelike structures to learn to differentiate better from worse translations.
in that work we used convolution kernels, which is computationally expensive and does not scale well to large datasets and complex structures such as graphs and enriched trees.
contrasting
train_3228
This works for closely related languages, e.g., the English word "new" is translated as "neu" in German and "nueva" in Spanish.
this fails when two languages are not closely related, e.g., Chinese/English.
contrasting
train_3229
The time complexity of computing the gradient is O(V_e V_f).
significant speedups can be achieved by precomputing v_e v_f^T and exploiting GPUs for matrix operations.
contrasting
train_3230
Given the reordering framework described above, we could try to directly predict the executions as Miceli Barone and Attardi (2013) attempted with their version of the framework.
the executions of a given sentence can have widely different lengths, which could make incremental inexact decoding such as beam search difficult due to the need to prune over partial hypotheses that have different numbers of emitted words.
contrasting
train_3231
At top level, this is defined as the previous RNN.
the x(j) and x o (j) vectors, in addition to the feature vectors described as above now contain also the final states of another recurrent neural network.
contrasting
train_3232
Deception detection has been formulated as a supervised binary classification problem on single documents.
in daily life, millions of fraud cases involve detailed conversations between deceivers and victims.
contrasting
train_3233
A common assumption in previous research was that a member is more likely to show a positive attitude toward other members in the same group, and a negative attitude toward the opposing groups (Abu-Jbara et al., 2012a).
a deceiver may pretend to be innocent by supporting those truth-tellers and attacking his teammates, whose identities have already been exposed.
contrasting
train_3234
Various abstractive approaches have been proposed to date (Nenkova et al., 2011).
these methods suffer from severe deficiencies.
contrasting
train_3235
The higher ROUGE scores imply that WikiKreator is generally able to retrieve useful information from the web, synthesize it and present the important information in the article.
it may also be noted that the Extractive system outperforms the Perceptron framework.
contrasting
train_3236
Our ILPbased abstractive summarization system fuses and selects content from multiple sentences, thereby aggregating information successfully from multiple sources.
lexRank 'extracts' the top 5 sentences that results in some information loss.
contrasting
train_3237
In that approach, users enter natural language queries in the middle of an existing program; this query drives a search for programs that are relevant to the query and fit within the surrounding program.
the function used to score derivations is a simple matching heuristic relying on the overlap between query terms and program identifiers.
contrasting
train_3238
The underspecified descriptions challenge assumptions in synchronous grammars: much of the target structure is implied rather than stated.
the classification method performs quite well.
contrasting
train_3239
Finally, it ranks the translations according to the weights of word-meaning pairs and the weights of the CCG parse trees.
test sentences may contain words which were not present in the training set.
contrasting
train_3240
Most previous work on text normalization for informal text made a strong assumption that the system already knows which tokens are non-standard words (NSW) and thus need normalization.
this is not realistic.
contrasting
train_3241
Short text messages or comments from social media websites such as Facebook and Twitter have become one of the most popular communication forms in recent years.
abbreviations, misspelled words and many other non-standard words are very common in short texts for various reasons (e.g., length limitation, need to convey much information, writing style).
contrasting
train_3242
This is also the reason why in Table 3, the performance of 3-way classification is significantly better than that of the two-step method using all the features.
we also find that when we only use lexical features (2∼10), the two methods have similar performance on Test set 2, but the two-step method has much better performance than the 3-way classifier method on Test set 1.
contrasting
train_3243
It matches the pair of strings (B): each aligned pair of wildcards is substituted in source and target sentences by the same word and string patterns of (A) can indeed be turned into pairs of substrings of the sentences.
it cannot match the pair of sentences (C) in the original kb-SRK.
contrasting
train_3244
For example, it is unlikely that replacing a word in a pattern of a rewriting rule by one of its holonyms will yield a semantically similar rewriting rule, so holonym would not be a good pattern type for most applications.
it can be very useful in a rewriting rule to type a wildcard link with the relation holonym, as this provides constrained semantic roles to the linked wildcards in the rule, thus holonym would be a good variable type.
contrasting
train_3245
": the first entails the second.
the pair "Former French president General Charles de Gaulle died in November.
contrasting
train_3246
In the Flickr-100M dataset, tags are assigned to images and videos in the form of sets of words, rather than grammatically coherent sentences.
the roles that individual words play are still discernible from their visual context, as manifested by the other words in a given set.
contrasting
train_3247
hair or fabric) are emphasized by the visual features.
the model based on visual features alone performs poorly on the dataset of Keller and Lapata (2003).
contrasting
train_3248
Taking the argument classes produced by the linguistic model as a basis and then re-ranking them to incorporate visual statistics helps to avoid the above problem for the interpolated models, whose output corresponds to grammatical relations.
static interpolation weights (emphasizing linguistic features over the visual ones for all verbs equally) outperformed the predicate-driven interpolation technique, attaining correlations of r = 0.548 and r = 0.476 respectively.
contrasting
train_3249
In the future, it would be interesting to derive the information about predicate-argument relations from low-level visual features directly.
to our knowledge, reliably mapping images to actions (i.e.
contrasting
train_3250
Existing methods for Japanese predicate argument structure (PAS) analysis identify case arguments of each predicate without considering interactions between the target PAS and others in a sentence.
the argument structures of the predicates in a sentence are semantically related to each other.
contrasting
train_3251
In this respect, C-BOW extends the distributional hypothesis (Harris, 1954) that words with similar context distributions should have similar meanings to longer sequences.
the word combinations of C-BOW are not natural linguistic constituents, but arbitrary n-grams (e.g., sequences of 5 words with a gap in the middle).
contrasting
train_3252
(Moschitti, 2006), where p is the largest subsequence of children that we want to consider and ρ is the maximal outdegree observed in the two trees.
the average running time tends to be linear for natural language syntactic trees (Moschitti, 2006).
contrasting
train_3253
This kernel works in the space of the union of the sets of all subtrees from the upper and lower trees, e.g.
: such features cannot capture the relations between the constituents (or semantic lexical units) from the two trees.
contrasting
train_3254
: However, such features cannot capture the relations between the constituents (or semantic lexical units) from the two trees.
these are essential to learn the relation between the two entire sentences 2 .
contrasting
train_3255
Moreover the unavailability of the used resources and the opacity of the used rules have also made such systems very difficult to replicate.
the models we propose enable researchers to: (i) build their system without the use of specific resources.
contrasting
train_3256
These systems are also adaptable and easy to replicate, but they are subject to an exponential computational complexity and can thus only be used on very small datasets (e.g., they cannot be applied to the MSR Paraphrase corpus).
the model we proposed in this paper can be used on large datasets, because its kernel complexity is about linear (on average).
contrasting
train_3257
To some extent, this can be seen as a specific kind of context information.
they ignore the label dependence by directly applying Binary Relevance to overcome the multi-label classification difficulty.
contrasting
train_3258
But, in both cases, we can consider them as a user's like.
among the expressions of attitudes where the source is the user, some of them do not refer to a like or dislike.
contrasting
train_3259
The disfluent structure of the sentence could thus be integrated to our syntactic and semantic rules.
the automatic detection of disfluencies is still an open challenge, in particular in the case of edit disfluencies where the speaker corrects or alters the utterance or abandons it entirely and starts over (Strassel, 2004).
contrasting
train_3260
Our system offers a first step in the integration of the interaction context by jointly considering the user's utterance and the previous agent's one, which allows us to correctly analyse a wide range of expressions.
the system and the annotators have to focus on the APs without considering the preceding speech turns, which can cause disagreements not only between the system outputs and the human annotations, but also between the human annotators.
contrasting
train_3261
(2010) were able to improve near state-of-the-art systems for several tasks, by simply plugging in the learned word representations as additional features.
because these features are estimated by minimizing the prediction errors made on a generic, unsupervised, task they might be suboptimal for the intended purposes.
contrasting
train_3262
• We show that the features derived from crowdsourced transcriptions perform as well as crowd grades in predicting expert grades.
crowd grades add additional predictive value.
contrasting
train_3263
Further, among the crowdsourcing approaches, we find that the crowd-grades model (Model RR-3) performs equivalently to (and sometimes worse than) the model using features derived from the crowdsourced speech (Model RR-4).
when we combine all the features from crowdsourcing including the crowd grades, we find much better prediction accuracy (r = 0.76).
contrasting
train_3264
Their results show that the former model can recognize MWEs with F1=71.1%, while the latter can significantly improve parsing accuracy and robustness in general.
the authors admit that "it remains to be seen how much of theoretically possible improvement can be realized when using automatic methods for MWU recognition".
contrasting
train_3265
They often are pregrouped as words-with-spaces in many parsing architectures (Sag et al., 2002).
we did not use gold tokenization, unrealistic for ambiguous MWEs (Nivre and Nilsson, 2004).
contrasting
train_3266
The location of an argument on the syntactic tree provides an intermediate tag for improving the performance.
building this syntactic tree also inevitably introduces prediction risk.
contrasting
train_3267
This pipeline addressed the data sparsity by initializing the model with word embeddings trained on a large unlabeled text corpus.
the convolution layer is not the best way to model long-distance dependencies since it only includes words within a limited context.
contrasting
train_3268
Without the y^(t−1) term, the RNN model reverts to the feed-forward form.
people often meet with two difficulties.
contrasting
train_3269
In all curves, performance degrades with increased sentence length.
the performance gain of our model over the baseline model is larger for longer sentences.
contrasting
train_3270
A smaller d_s means that it is easy to make the prediction that a long history is unnecessary.
a large d_s results in a difficult prediction that long historical information is needed.
contrasting
train_3271
On one hand, the LSTM network is capable of capturing long-distance dependencies, especially in its deep form.
the traditional feature templates are only good at describing properties in the neighborhood, and a small mistake in the syntactic tree will result in a large deviation in SRL tagging.
contrasting
train_3272
In practice, however, our model does not work well if it is only trained on the manually annotated Treebank data sets.
when pre-trained on a large amount of automatically parsed data and then fine-tuned on the Treebank data sets, our model achieves a fairly large improvement in performance.
contrasting
train_3273
Third, we utilize the Dropout strategy to address the overfitting problem.
different from Hinton et al.
contrasting
train_3274
(2013) built a recursive neural network for constituent parsing.
rather than performing full inference, their model can only score parse candidates generated from another parser.
contrasting
train_3275
Our model also requires a parser to generate training samples for pre-training.
our system is different in that, during testing, our model performs full inference with no need of other parsers.
contrasting
train_3276
(2012) propose a static postparsing analysis to categorise groups of bracket errors in constituency parsing into higher level error classes such as clause attachment.
this cannot account for cascading changes resulting from repairing errors, or limitations which may prevent the parser from applying a repair.
contrasting
train_3277
McDonald and Nivre (2011) perform an indepth comparison of the graph-based MSTparser and transition-based MaltParser.
maltParser uses support vector machines to deterministically predict the next transition, rather than storing the most probable options in a beam like ZPar.
contrasting
train_3278
PPs and coordination have high effective constraint percentages relative to the other error classes for both parsers.
they are also amongst the most isolated errors, with only 0.3% and 0.4% ∆u for MSTparser and ZPar respectively.
contrasting
train_3279
Both parsers make few root attachment errors, though MSTparser is less accurate than ZPar.
root constraints provide the largest UAS improvement per number of constraints for both parsers.
contrasting
train_3280
Feature-based discriminative supervised models have achieved much progress in dependency parsing (Nivre, 2004; Yamada and Matsumoto, 2003; McDonald et al., 2005), which typically use millions of discrete binary features generated from limited-size training data.
the ability of these models is restricted by the design of features.
contrasting
train_3281
The combination is relatively simple and its correctness can be measured with the final representation of the non-terminal node (Socher et al., 2013a).
for dependency parsing, all combinations of the head h and its children c_i (0 < i ≤ K) are important to measure the correctness of the subtree.
contrasting
train_3282
When k is larger, the number of negative samples also needs to increase multiplicatively for training.
we can only obtain at most k negative samples from the k-best outputs of the base parser.
contrasting
train_3283
Specific to the re-ranking model, Le and Zuidema (2014) proposed a generative re-ranking model with Inside-Outside Recursive Neural Network (IORNN), which can process trees both bottom-up and top-down.
iORNN works in a generative way and just estimates the probability of a given tree, so iORNN cannot fully utilize the incorrect trees in the k-best candidate results.
contrasting
train_3284
Previous models rely heavily on richer syntactic information through lexicalizing rules, splitting categories, or memorizing long histories.
enriched models incur numerous parameters and sparsity issues, and are insufficient for capturing various syntactic phenomena.
contrasting
train_3285
A popular parsing algorithm is a cubic-time chart-based dynamic programming algorithm that uses probabilistic context-free grammars (PCFGs).
pCFGs learned from treebanks are too coarse to represent the syntactic structures of texts.
contrasting
train_3286
Inspired by CVG (Socher et al., 2013), we differentiate the matrices for each non-terminal (or POS) label X rather than using shared parameters.
our model differs in that the parameters are untied on the basis of the left hand side of a rule, rather than the right hand side, because our model assigns a score discriminatively for each action with the left hand side label X unlike a generative model derived from PCFGs.
contrasting
train_3287
As can be seen, greater word representation dimensions are generally helpful for both WSJ and CTB on the closed development data (dev), which may match our intuition that a richer syntactic and semantic knowledge representation for each word is required for parsing.
overfitting was observed when using a 32-dimension hidden vector in both tasks, i.e., drops in performance on the open test data (test) when m = 1024, probably caused by the limited generalization capability of the smaller hidden state size.
contrasting
train_3288
If the NP her goals is a likely ARG1 of realized the parser should prefer the main clause structure.
if the NP is a likely ARG0 of an (as yet unseen) embedded verb, then the parser should go for the subordinate clause structure.
contrasting
train_3289
Fringes capture the fact that in an incremental derivation, a prefix tree can only be combined with an elementary tree at a limited set of nodes.
for instance, the prefix tree in figure 1 has two substitution nodes, for B and C. Only substitution into B leads to a valid new prefix tree; if we substitute into C, we obtain the tree in figure 1b, which is not a valid prefix tree (i.e., it represents a non-incremental derivation).
contrasting
train_3290
Volokh and Neumann (2008) use a variant of Nivre's (2007) incremental shift-reduce parser and rely only on the current word and previous content to output partial dependency trees; then they output role labels given the full parser output.
to all the joint approaches, we perform both parsing and semantic role labeling strictly incrementally, without having access to the whole sentence, outputting prefix trees and iSRL triples for every sentence prefix.
contrasting
train_3291
A category-based evaluation of discontinuous constituents reveals that EXTENDED has an advantage over DISCO when considering all constituents.
we can also see that the DISCO features yield better results than EXTENDED particularly on the frequent discontinuous categories (NP, VP, AP, PP), which indicates that the information about gap type and gap length is useful for the recovery of discontinuities.
contrasting
train_3292
EXTENDED and DISCO yield an improvement on all constituents.
now not only DISCO, but also EXTENDED leads to improved scores on discontinuous constituents.
contrasting
train_3293
The parsing speed on the test set drops to around 39 sentences per second.
we achieve 75.10 F1, i.e., a slight improvement over the experiments in Tab.
contrasting
train_3294
In comparison to the baseline setting of the shift-reduce parser with beam size 8, the results are around 10 points worse.
rparse reaches an F1 of 26.61 on discontinuous constituents, which is 5.9 points more than we achieved with the best setting with our parser.
contrasting
train_3295
The key difference is that Chen and Manning (2014) is a local classifier that greedily optimizes each action.
zhang and Nivre (2011) leverage a structured-prediction model to optimize whole sequences of actions, which correspond to tree structures.
contrasting
train_3296
The model size of ZPar (Zhang and Nivre, 2011) is over 250 MB on disk.
the model size of our structured neural parser is only 25 MB.
contrasting
train_3297
Bohnet and Nivre (2012) obtain an accuracy of 93.67%, which is higher than our parser.
their parser is a joint model of parsing and POS-tagging, and they use external data in parsing.
contrasting
train_3298
The input embeddings of our parser are also trained over large raw text, and in this perspective our model is correlated with the semi-supervised models.
because we fine-tune the word embeddings in supervised training, the embeddings of in-vocabulary words become systematically different from those of out-of-vocabulary words after training, and the effect of pre-trained out-of-vocabulary embeddings becomes uncertain.
contrasting
train_3299
Using the Viterbi algorithm, they can compute the exponential partition function in linear time without approximation.
with a dynamic programming decoder, their sequence labeling model can only extract local features.
contrasting