Columns: id (string, length 7–12) · sentence1 (string, length 6–1.27k) · sentence2 (string, length 6–926) · label (string, 4 classes)
train_14300
For our applications, this means that the vocabulary tests for vocabulary prediction must always be computerized.
non-interactive algorithms allow us to have vocabulary tests printed in the form of handouts, so we focus on non-interactive algorithms throughout this paper.
contrasting
train_14301
In the usual label propagation setting, the "test" nodes (data) are prepared separately from the training nodes to determine how accurately the algorithm can classify forthcoming or unseen nodes.
in our setting, there were no such forthcoming words.
contrasting
train_14302
Another alternative method is Information Gain (Yang and Pedersen, 1997).
as defined in equation 2, it measures the entropy gain associated with feature t in assigning the class label c. These methods are limited: they do not provide ranked lists per-L1 class, and more importantly, they do not explicitly capture underuse.
contrasting
train_14303
The verbs in LVCs are assumed to be flexible for inflection (Baldwin and Kim, 2010).
we know little about how finiteness contributes to the formation of LVCs.
contrasting
train_14304
That is, previous studies focused only on text printed on paper.
with the increasing use of hand-held devices, people these days use various reading devices such as tablets and smartphones as well as paper.
contrasting
train_14305
According to their results, the average number of verb phrases in a sentence, the number of words in an article, the likelihood of the vocabulary, and the likelihood of the discourse relations are highly correlated with human ratings.
these studies did not consider the reading devices, but focused on how well a text is written.
contrasting
train_14306
The large number of noun phrases in a text requires a reader to remember more items (Barzilay and Lapata, 2008;Pitler and Nenkova, 2008).
it also makes the text more interesting.
contrasting
train_14307
In other words, English abbreviation generation is based on words in the full form.
in Chinese, the word is not the most suitable abbreviating unit.
contrasting
train_14308
VerbNet and WordNet) are used to generalize structured event features in order to reduce their sparseness.
the problem of accurately predicting stock price movement using structured events is challenging, since events and the stock market can have complex relations, which can be influenced by hidden factors.
contrasting
train_14309
Intuitively, three or more hidden layers may achieve better performance.
three hidden layers mean that we construct a five-layer deep neural network, which is difficult to train (Bengio et al., 1994).
contrasting
train_14310
These studies primarily use bags-of-words to represent financial news documents.
as Schumaker and Chen (2009) and Xie et al.
contrasting
train_14311
Imposing minimum node and edge frequencies in the co-occurrence graph was also tested.
applying no thresholds provided the highest average coherence.
contrasting
train_14312
Whereas many solutions in NLP (including topic models) require document segmentation, lexical normalization and statistical normalizations on the co-occurrence matrix itself, the only variable in our method is the co-occurrence window size.
lemmatization (or stemming) could help collapse morphosyntactic variation among terms in the results, but stop-word removal, sentence segmentation and TF-IDF weighting appear unnecessary.
contrasting
train_14313
One way to measure the importance of a community would be to use significance testing on the internal link mass compared to the external (Csardi and Nepusz, 2006).
this approach discards some factors for which one might want to account, such as centrality in the network of communities and their composition.
contrasting
train_14314
Recently, Wan and Xiao (2008) proposed a model that incorporates a local neighborhood of a document.
their neighborhood is limited to textually-similar documents, where the cosine similarity between the tf-idf vectors of documents is used to compute their similarity.
contrasting
train_14315
Nguyen and Kan (2007) extended KEA to include features such as the distribution of keyphrases among different sections of a research paper, and the acronym status of a term.
to these works, we propose novel features extracted from the local neighborhoods of documents available in interlinked document networks.
contrasting
train_14316
(2009) extended KEA as well to integrate information from Wikipedia.
we used only information intrinsic to our data.
contrasting
train_14317
(2010), where a set of important keyphrases is extracted first from the citation contexts in which the paper to be summarized is cited by other papers and then the "best" subset of sentences that contain such keyphrases is returned as the summary.
keyphrases in (Qazvinian et al., 2010) are extracted using frequent n-grams in a language model framework, whereas in our work, we propose a supervised approach to a different task: keyphrase extraction.
contrasting
train_14318
Hence, similar to (Hulth, 2003;Mihalcea and Tarau, 2004;Liu et al., 2009), we did not use the entire text of a paper.
extracting keyphrases from sections such as "introduction" or "conclusion" needs further attention.
contrasting
train_14319
For this reason, we used the contexts provided by CiteSeerX directly.
in the future, it would be interesting to incorporate in our models more sophisticated approaches to identifying the text that is relevant to a target citation (Abu-Jbara and Radev, 2012; Teufel, 1999) and study the influence of context lengths on the quality of extracted keyphrases.
contrasting
train_14320
For example, it is difficult to distinguish true semantic similarity (e.g., "cows" -"cattle") from mere associational relatedness (e.g., "cows" -"milk") based on cooccurrence statistics.
coreference chains should be able to make that distinction since only "cows" and "cattle" can occur in the same coreference chain, not "cows" and "milk".
contrasting
train_14321
The pattern-based approach (Lin et al., 2003;Turney, 2008) discussed above also needs few resources.
to our work, it relies on patterns and might therefore restrict the number of recognizable synonyms and antonyms to those appearing in the context of the pre-defined patterns.
contrasting
train_14322
That is, if many of the character n-grams of a fragment are infrequent in the document, it would be probably a plagiarized fragment.
if many of them are frequent, then the fragment is likely to be original.
contrasting
train_14323
Note that, what we explained above is solely how to compute the class of each n-gram of a document.
our purpose is to represent the document fragments using these classes.
contrasting
train_14324
Several recent papers on Arabic dialect identification have hinted that using a word unigram model is sufficient and effective for the task.
most previous work was done on a standard fairly homogeneous dataset of dialectal user comments.
contrasting
train_14325
We used the trained model to segment the training and test sets.
• Morphological Rules: to Morfessor, we developed only 15 morphological rules (based on the analysis proposed in Section 3) to segment ARZ text.
contrasting
train_14326
Combining S_mrph and S_rule features with the S_lex feature led to further improvement.
as shown in Table 2, using the S_lex feature alone with the MAN and VERB lists led to the best results (94.6%), outperforming using all other features either alone or in combination.
contrasting
train_14327
Finally, the enhanced query model (that is P(w|H) in speech recognition) can be estimated by RM, SMM, RSMM or QMM, and further combined with the background n-gram (e.g., trigram) language model to form an adaptive language model to guide the speech recognition process.
extractive speech summarization aims at producing a concise summary by selecting salient sentences or paragraphs from the original spoken document according to a predefined target summarization ratio (Carbonell and Goldstein, 1998;Mani and Maybury, 1999;Nenkova and McKeown, 2011;Liu and Hakkani-Tur, 2011).
contrasting
train_14328
Another finding was that the candidates' power correlates with the distribution of topics they speak about in the debates: candidates with more power spoke significantly more about certain topics (e.g., economy) and less about certain other topics (e.g., energy).
these findings relate to the specific election cycle that was analyzed and will not carry over to political debates in general.
contrasting
train_14329
We also investigated the utility of topic shifting features described in (Prabhakaran et al., 2014) extracted using LDA based topic modeling.
they did not improve the performance of the ranker, and hence we do not discuss them in detail in this paper.
contrasting
train_14330
Thus the PLRE improvement is small for order = 2, but more substantial for order = 3.
for the Small-Russian dataset, the vocabulary size is much larger and consequently the bigram counts are sparser.
contrasting
train_14331
An alternate technique is to use word-classing (Goodman, 2001; Mikolov et al., 2011), which can reduce the cost of exact normalization to O(√V).
our approach is much more scalable, since it is trivially parallelized in training and does not require explicit normalization during evaluation.
contrasting
train_14332
This places the trigger from the question as the source of the query path (see both queries in the bottom right portion of the running example).
had the verb been require, the trigger would be the target of the query.
contrasting
train_14333
Instead it just sets them to zero at the beginning and uses the pivot slice to re-calculate them.
our method of BPTF is well suited to symmetric relations with many unknown relatedness entries.
contrasting
train_14334
The vocabulary of the antonym entries in the thesaurus is limited, and does not contain many words in the antonym questions.
distributional similarities can be trained from large corpora and hence have a large coverage for words.
contrasting
train_14335
Our model usually takes less than 30 minutes to meet the convergence criteria (on a machine with an Intel Xeon E3-1230V2 @ 3.3GHz CPU).
the MRLSA requires about 3 hours for tensor decomposition (Chang et al., 2013).
contrasting
train_14336
(6) would exhibit the exchange symmetry if not for the log(X_i) on the right-hand side.
this term is independent of k so it can be absorbed into a bias b_i for w_i.
contrasting
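The pair above describes the symmetry-repair step of a log-bilinear co-occurrence factorization (as in GloVe); assuming that standard model, the step it alludes to can be written out as:

```latex
% Asymmetric starting point: log(X_i) breaks the i <-> k exchange symmetry.
%   w_i : word vector, \tilde{w}_k : context vector, X_{ik} : co-occurrence count.
\begin{aligned}
  w_i^{\top}\tilde{w}_k &= \log(X_{ik}) - \log(X_i) \\
  % log(X_i) is independent of k, so absorb it into a bias b_i;
  % adding \tilde{b}_k restores the exchange symmetry:
  w_i^{\top}\tilde{w}_k + b_i + \tilde{b}_k &= \log(X_{ik})
\end{aligned}
```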
train_14337
At first glance this might seem like a substantial improvement over the shallow windowbased approaches, which scale with the corpus size, |C|.
typical vocabularies have hundreds of thousands of words, so that |V|^2 can be in the hundreds of billions, which is actually much larger than most corpora.
contrasting
train_14338
The context of a word is often defined as the words appearing in a window of fixed-length (bag-of-words) and a simple approach is to treat the co-occurrence statistics of a word w as a vector representation for w (Mitchell and Lapata, 2008;Mitchell and Lapata, 2010); alternatively, dependencies between words can be used to define contexts (Goyal et al., 2013;Erk and Padó, 2008;Thater et al., 2010).
to distributional representations, NNLMs represent words in a low-dimensional vector space (Bengio et al., 2003;Collobert et al., 2011).
contrasting
train_14339
This is equivalent to a softmax regression model.
when the vocabulary V is large, computing the softmax function in Eq.
contrasting
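The cost referred to in the pair above is that normalizing a softmax over a vocabulary V is linear in |V| per prediction; a minimal sketch (the three-entry score vector is made up for illustration):

```python
import math

def softmax(scores):
    """Normalize a score vector into a probability distribution.

    The normalizer z sums over every entry, so a single prediction
    costs O(|V|) for a vocabulary of size |V| -- the bottleneck this
    record points out when V is large.
    """
    m = max(scores)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)                         # O(|V|) normalization term
    return [e / z for e in exps]

probs = softmax([2.0, 1.0, 0.1])
```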
train_14340
The scores ρ = 0.51 for the NN task and ρ = 0.48 for the VO task are the best results to date.
the score ρ = 0.34 for the SVO task did not improve by increasing the dimensionality.
contrasting
train_14341
The traditional approach is based on the assumption that every mention of an entity pair (e.g., Obama and USA) participates in the known relation between the two (i.e., born in).
this introduces noise, as not every mention expresses the relation we are assigning to it.
contrasting
train_14342
Other than using the existing known facts to label the text corpora in a distant supervision setting (Bunescu and Mooney, 2007;Mintz et al., 2009;Riedel et al., 2010;Ritter et al., 2013), an existing knowledge base is typically not involved in the process of relation extraction.
this paradigm has started to shift recently, as researchers showed that by taking existing facts of a knowledge base as an integral part of relation extraction, the model can leverage richer information and thus yields better performance.
contrasting
train_14343
(2011) made the observation that if λ = 0, the matrix inversion can be calculated in a simpler form; then, it only involves an inversion of an r × r matrix, namely A^T A.
if λ > 0, directly calculating Eq.
contrasting
train_14344
Using this approach, the task of relation extraction can easily be scaled to hundreds of different relationships.
distant supervision leads to a challenging multiple instance, multiple label learning problem.
contrasting
train_14345
Thus, the vector of an entity may encode global information from the entire graph, and hence scoring a candidate fact by designed vector operations plays a similar role to long range "reasoning" in the graph.
since this requires the vectors of both entities to score a candidate fact, this type of methods can only complete missing facts for which both entities exist in the knowledge graph.
contrasting
train_14346
Moreover, the number of matched Wikipedia anchors (∼40M) is relatively small compared to the total number of word pairs (∼2.0B in Wikipedia) and hence the contribution is limited.
the advantage is that the quality of the data is very high and there are no ambiguity/completeness issues.
contrasting
train_14347
Observing the scores assigned to true triplets by TransE, we notice that triplets of popular relations generally have larger scores than those of rare relations.
pTransE, as a probabilistic model, assigns comparable scores to true triplets of both popular and rare relations.
contrasting
train_14348
Most existing works on sentiment summarization focus on predicting the overall rating on an entity (Pang et al., 2002;Pang and Lee, 2004) or estimating ratings for product features (Lu et al., 2009;Lerman et al., 2009;Snyder and Barzilay, 2007;Titov and McDonald, 2008)).
the opinion summaries in such systems are extractive, meaning that they generate a summary by concatenating extracts that are representative of opinion on the entity or its aspects.
contrasting
train_14349
Starlet-H uses extractive summarization techniques to select salient quotes from the input reviews and embeds them into the abstractive summary to exemplify, justify or provide evidence for the aggregate positive or negative opinions.
starlet-H assumes a limited number of aspects as input and needs a large amount of training data to learn the ordering of aspects for summary generation.
contrasting
train_14350
Highlighting the reasons behind opinions in reviews was also previously proposed in (Kim et al., 2013).
their approach is extractive and, similar to (Ganesan et al., 2010), does not cover the distribution of opinions.
contrasting
train_14351
As an alternative we could have selected connectives based on the discourse relations specified in the aspects tree.
this is left as future work.
contrasting
train_14352
For aspect phrase 1, it seems that the sentiment distribution is consistent with that of the left aspect.
we cannot say that the phrase belongs to the aspect because the distribution may be the same for two different aspects.
contrasting
train_14353
For each supervised model, we provide a proportion of manually labeled data for training, which is randomly selected from gold-standard annotations.
we didn't use any labeled data for our approach.
contrasting
train_14354
For example, multi-document summarization of news articles aims at synthesizing contents of similar news and removing the redundant information contained by the different news articles.
each scientific paper has much specific content to state its own work and contribution.
contrasting
train_14355
There are several NLP systems whose accuracy depends crucially on finding misspellings fast.
the classical approach is based on a quadratic time algorithm with 80% coverage.
contrasting
train_14356
Any accurate systems, such as the ones developed for cross document coreference, text similarity, semantic search or digital humanities, should be able to handle the misspellings in corpora.
the issue is not easy and the required processing time, memory or the dependence on external resources grow fast with the size of the analyzed corpus; consequently, most of the existing algorithms are inefficient.
contrasting
train_14357
For this reason the ED algorithm is the most common way to detect and correct the misspellings.
there is a major inconvenience associated with the use of ED, namely, ED runs in quadratic time in the length of the strings, O(n²).
contrasting
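The quadratic-time behaviour the pair above describes is that of the classic Wagner–Fischer dynamic program for edit distance, which fills an (n+1)×(m+1) table; a minimal sketch:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the Wagner-Fischer DP recurrence.

    Time is O(len(a) * len(b)) -- quadratic in the string length,
    which is the inconvenience the record points out.
    """
    n, m = len(a), len(b)
    # prev[j] holds the distance between a[:i-1] and b[:j]
    prev = list(range(m + 1))
    for i in range(1, n + 1):
        cur = [i] + [0] * m
        for j in range(1, m + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution / match
        prev = cur
    return prev[m]
```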
train_14358
The CDC system computes the probability of coreference for two mentions t and t' using a similarity metric in a vector space, where vectors are made of contextual features occurring with t and t' respectively (Grishman, 1994).
the information extracted from documents is often too sparse to decide on coreference (Popescu, 2009).
contrasting
train_14359
Thus, missing a coreference pair may result in losing the possibility of realizing further coreferences.
for two mentions matching a misspelling pattern which is highly accurate, the threshold for contextual evidence is lowered.
contrasting
train_14360
Nevertheless, noisy WA makes both analyzing WS and improving SMT quality quite hard.
by using manual WA, we can clearly analyze the segmentation problems (Section 2), and train supervised models to solve the problem (Section 3).
contrasting
train_14361
The direct translation model trained with the standard bilingual corpus exceeds in translation performance, but its weakness lies in low phrase coverage.
the pivot model has the opposite characteristics.
contrasting
train_14362
Synthetic Method: It aims to create a synthetic source-target corpus by: (1) translating the pivot part in the source-pivot corpus into the target language with a pivot-target model; (2) translating the pivot part in the pivot-target corpus into the source language with a pivot-source model; (3) combining the source sentences with the translated target sentences and/or combining the target sentences with the translated source sentences (Utiyama et al., 2008; Wu and Wang, 2009).
it is difficult to build a high quality translation system with a corpus created by a machine translation system.
contrasting
train_14363
The ability to make context-sensitive translation decisions is one of the major strengths of phrasebased SMT (PSMT).
the way PSMT exploits source-language context has several limitations as pointed out, for instance, by Quirk and Menezes (2006) and Durrani et al.
contrasting
train_14364
Arabic-English results did not reveal statistically significant differences between the two distortion limits for Pos→Pos→Pos•Pos.
for Lex•Lex BLEU decreases when using a distortion limit of 10 compared to a limit of 5.
contrasting
train_14365
RI, thus, significantly reduces the computational complexity of deriving a VSM from text.
the application of the RI technique (likewise the standard truncated SVD in LSA) is limited to ℓ2-normed spaces, i.e.
contrasting
train_14366
In order to employ the RMI method for the construction of a VSM at reduced dimension and the estimation of the ℓ1 distance between vectors, two model parameters should be decided: (a) the targeted (reduced) dimensionality of the VSM, which is indicated by m in Equation 8 and (b) the number of non-zero elements in index vectors, which is determined by s in Equation 6.
to the classic one-dimension-per-context-element methods of VSM construction, the value of m in RPs and thus in RMI is chosen independently of the number of context elements in the model (n in Equation 8).
contrasting
train_14367
These vectors must be stored and accessed efficiently when using the RMI technique.
the resources required for the storage and processing of floating-point numbers are high.
contrasting
train_14368
The bottom-right plot shows the cluster of phrases that are semantically similar (countries or regions).
the top-right plot shows the phrases that are syntactically similar.
contrasting
train_14369
It is common practice to directly use the type value of each variable as an index and maintain a set of sites for each type.
maintaining an (r_1, r_2, r_3) triple for each node in the chosen tree is too memory-heavy in our application.
contrasting
train_14370
As each tree fragment sampled from the forest represents a unique translation rule, we do not need to explicitly extract the rules; we merely need to collect them and count them.
the fragments sampled include purely non-lexical rules that do not conform to the rule constraints of Hiero, and rules that are not useful for translation.
contrasting
train_14371
Posing phrase movements as a prediction problem using contextual features modeled by maximum entropy-based classifier is superior to the commonly used lexicalized reordering model.
training this discriminative model using large-scale parallel corpus might be computationally expensive.
contrasting
train_14372
State-of-the-art statistical machine translation systems use large amounts of parallel data to estimate translation models.
parallel corpora are expensive and not available for every domain.
contrasting
train_14373
It is interesting to compare these weights to the perplexities of the correct decipherment measured using different n-gram orders (Table 5).
at this point we do not see any obvious connection between perplexities and weights w_n, and leave this as a further research direction.
contrasting
train_14374
At the same time, the word for 'thrombocytopenia' appears frequently in the disease domain and can be treated as a domain-specific term.
for such a simple sentence, current segmentation tools perform poorly.
contrasting
train_14375
For example, for the MWE china clay, the definition is kaolin, which includes neither of the components.
we find the component word clay in the definition for kaolin, as shown below.
contrasting
train_14376
For annotator noise modelling, we assume that a "ground truth" exists and that annotations are some noisy deviations from this truth.
for some settings these assumptions do not necessarily hold and often tasks can be anti-correlated.
contrasting
train_14377
Setting a larger rank allows more flexibility in modelling task correlations.
a higher number of hyperparameters may lead to overfitting problems or otherwise cause issues in optimisation due to additional non-convexities in the log likelihood objective.
contrasting
train_14378
It is thus outperformed by our method.
we note that the differences in experimental setting mean this comparison is perhaps not completely fair (different training sets).
contrasting
train_14379
We can obtain a gold DEP-DT by transforming a gold Rhetorical Structure Theory-based discourse tree (RST-DT).
there is still a large difference between the ROUGE scores of a system with a gold DEP-DT and a system with a DEP-DT obtained from an automatically parsed RST-DT.
contrasting
train_14380
The above formulation of the word similarity model can be interpreted as a mixture model in which w is similar to w if any of the context probabilities agrees.
to guard against false positives, we can alternatively reformulate it as a product of experts (Hinton, 1999), where Z(w) is a normalization constant.
contrasting
train_14381
Models 2-5 and the HMM-based model introduce additional components in order to capture word ordering and word fertility.
they have p(f | e) in common.
contrasting
train_14382
A more semantic approach resorts to training word alignments on semantic word classes (Ma et al., 2011).
the resulting alignments are only used to supplement the word alignments learned on lexical words.
contrasting
train_14383
After having generated for each word their vector representation, we use them as features for the annotated data to classify event roles.
event role fillers are not generally single words but noun phrases that can be, in some cases, identified as named entities.
contrasting
train_14384
Systems such as ReVerb, PATTY, OLLIE, and Exemplar have attracted much attention on English ORE.
few studies have been reported on ORE for languages beyond English.
contrasting
train_14385
Many NLP and IR applications, including selectional preference learning, commonsense knowledge and entailment rule mining, have benefited from ORE (Ritter et al., 2010).
most existing ORE systems focus on English, and little research has been reported on other languages.
contrasting
train_14386
We can see that, just like the case of using other feature alone, using the score of random walk alone is far from enough.
the first 5 candidates contain most of the correct answers.
contrasting
train_14387
However, it performs poorly without the character features (Column 2).
without the character features, our method (Column 4) works much better than the sequence labeling method.
contrasting
train_14388
Distant supervision has become the leading method for training large-scale relation extractors, with nearly universal adoption in recent TAC knowledge-base population competitions.
there are still many questions about the best way to learn such extractors.
contrasting
train_14389
In addition, they also used human annotated linguistic information as "latent" features in their work, which are similar to our implicit linguistic features.
the "latent" features that they used in their system are human-annotated, while the eventuality type and modality features in our system are predicted automatically.
contrasting
train_14390
For example, they excluded verbal expressions in Chinese that are translated into nominal phrases in English.
we kept all events in our data, and they can be realized as verbs, nouns, as well as words in other parts of speech.
contrasting
train_14391
There are two categories that are even less than 5%.
even though we only use some simple features, our model still beats the most frequent label baseline (around 35% accuracy) by a big margin, as shown in Table 4.
contrasting
train_14392
In fact, on the datasets with all events, joint learning with modality produces the highest accuracy among all approaches.
joint learning with eventuality is even worse than the baseline.
contrasting
train_14393
In the Chinese sentence, MaxEnt_b classifies "讨论 (tao3lun4)" as "Future" because there is no grammatical indicator in the Chinese sentence implying that the "discussion" has already happened, and it is reasonable to assume the "discussion" is in the near future.
with eventuality type "Episodic" and modality label "Actual", MaxEnt_em classifies it as "Past" correctly, because episodic events tend to occur in the past and future events tend to get "Intended" or "Hypothetical" modality labels.
contrasting
train_14394
Populating Knowledge Base (KB) with new knowledge facts from reliable text resources usually consists of linking name mentions to KB entities and identifying relationship between entity pairs.
the task often suffers from errors propagating from upstream entity linkers to downstream relation extractors.
contrasting
train_14395
Information is often augmented or conveyed by non-textual features such as positioning, font size, color, and images.
traditionally, NER approaches rely exclusively on textual features and as a result could perform poorly in visually rich genres such as online marketing flyers or infographics.
contrasting
train_14396
To improve accuracy, CoTS combined this frequency signal with manually supplied constraints such as the functionality of the US presidency relation to scope the beginning and end of Nixon presidency.
the proposed system does not require constraints as input.
contrasting
train_14397
Prior work, CoTS, inferred end times by leveraging manually specified constraints, e.g., that there can only be one vice president at a time: the beginning of one signals the end of another (Talukdar 2012b).
such methods do not scale due to the amount of constraints that must be hand-specified.
contrasting
train_14398
Our ILP formulation takes around 8.5 hours.
the mImLRE algorithm (EM-based) takes around 23 hours to converge.
contrasting
train_14399
This is because hoffmann-ilp predicts significantly more nil labels.
nil labels are not part of the label-set in the P/R curves reported in the community.
contrasting