Columns:
id: string, length 7–12
sentence1: string, length 6–1.27k
sentence2: string, length 6–926
label: string, 4 classes
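The flattened header above can be read as the record schema sketched below. This is a minimal, hedged illustration only: the field names and length/class constraints come from the header, while the Python typing and the example values (copied from the first row, train_1200) are assumptions about how a record could be represented in code, not part of the dataset itself.

```python
# Hypothetical sketch of one record, using the field names and constraints
# from the schema above. Only the "contrasting" label appears in the rows
# listed below; the other three class names are not given here.
from typing import TypedDict


class Record(TypedDict):
    id: str         # e.g. "train_1200", 7-12 characters
    sentence1: str  # first sentence, 6 to ~1.27k characters
    sentence2: str  # second sentence, 6-926 characters
    label: str      # one of 4 classes; "contrasting" in every row shown here


example: Record = {
    "id": "train_1200",
    "sentence1": "3 only works if the syntactic parse tree strictly follows "
                 "the predicate-argument structure of the MR, since meaning "
                 "composition at each node is assumed to combine a predicate "
                 "with one of its arguments.",
    "sentence2": "this assumption is not always satisfied, for example, in "
                 "the case of verb gapping and flexible word order.",
    "label": "contrasting",
}

assert 7 <= len(example["id"]) <= 12
assert example["label"] == "contrasting"
```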
train_1200
3 only works if the syntactic parse tree strictly follows the predicate-argument structure of the MR, since meaning composition at each node is assumed to combine a predicate with one of its arguments.
this assumption is not always satisfied, for example, in the case of verb gapping and flexible word order.
contrasting
train_1201
Previous literature in attribute extraction takes advantage of a range of data sources and extraction procedures (Chklovski and Gil, 2005; Tokunaga et al., 2005; Paşca and Van Durme, 2008; Yoshinaga and Torisawa, 2007; Probst et al., 2007; Wu and Weld, 2008).
these methods do not address the task of determining the level of specificity for each attribute.
contrasting
train_1202
Namely, the noun model performs significantly better (p < 0.05) than the adjective model, and the multiplicative model performs significantly better (p < 0.05) than the additive model.
the difference between the multiplicative model and the noun model is not statistically significant in this case.
contrasting
train_1203
That is, the noun model outperformed the adjective model and the multiplicative model outperformed the additive model when using attribute-specifying adjectives.
for the object-modifying adjectives, the noun model no longer outperformed the adjective model.
contrasting
train_1204
A zero-pronoun may or may not have its antecedent in the discourse; in the case it does, we say the zero-pronoun is anaphoric.
a zero-pronoun whose referent does not explicitly appear in the discourse is called a non-anaphoric zero-pronoun.
contrasting
train_1205
By comparing the dynamic cache model using correct zero-anaphora resolution (denoted by DCM (with ZAR) in Figure 4) and the one without it (DCM (w/o ZAR)), we can see that correct zero-anaphora resolution contributes to improving the caching for every cache size.
in the practical setting the current zero-anaphora resolution system sometimes chooses the wrong candidate as an antecedent or does not choose any candidate due to wrong anaphoricity determination, negatively impacting the performance of the cache model.
contrasting
train_1206
For example, the 91.5 F-measure reported by McCallum and Wellner (2004) was produced by a system using perfect information for several linguistic subproblems.
the 71.3 F-measure reported by Yang et al.
contrasting
train_1207
Thus, we would expect a coreference resolution system to depend critically on its Named Entity (NE) extractor.
state-of-the-art NE taggers are already quite good, so improving this component may not provide much additional gain.
contrasting
train_1208
Any genre-related differences in word usage and/or syntax have just meant a wider variety of words and sentences shaping the coverage of these tools.
ignoring this variety may actually hinder the development of robust language technology for analysing and/or generating multi-sentence text.
contrasting
train_1209
They achieve 81% accuracy in sense disambiguation on this corpus.
graphBank annotations do not differentiate between implicits and explicits, so it is difficult to verify success for implicit relations.
contrasting
train_1210
They found that even with millions of training examples, prediction results using all words were superior to those based on only pairs of non-function words.
since the learning curve is steeper when function words were removed, they hypothesize that using only non-function words will outperform using all words once enough training data is available.
contrasting
train_1211
For example, the pair rose:fell often indicates a Comparison relation when speaking about stocks.
occasionally authors refer to stock prices as "jumping" rather than "rising".
contrasting
train_1212
We show that the features in fact do not capture semantic relations but rather give information about function word co-occurrences.
they are still a useful source of information for discourse relation prediction.
contrasting
train_1213
These two conclusions are consistent with the original intuition.
using any single one does not provide competence in selecting the best set of features.
contrasting
train_1214
(2007) have investigated a model for jointly performing sentence-and document-level sentiment analysis, allowing the relationship between the two tasks to be captured and exploited.
the increased sophistication of supervised polarity classifiers has also resulted in their increased dependence on annotated data.
contrasting
train_1215
At first glance, it may seem plausible to apply an unsupervised clustering algorithm such as k-means to cluster the reviews according to their polarity.
there is reason to believe that such a clustering approach is doomed to fail: in the absence of annotated data, an unsupervised learner is unable to identify which features are relevant for polarity classification.
contrasting
train_1216
In other words, the ability to determine the relevance of each feature is crucial to the accurate clustering of the ambiguous data points.
in the absence of labeled data, it is not easy to assess feature relevance.
contrasting
train_1217
Speaker attributes such as gender, age, dialect, native language and educational level may be (a) stated overtly in metadata, (b) derivable indirectly from metadata such as a speaker's phone number or userid, or (c) derivable from acoustic properties of the speaker, including pitch and f0 contours (Bocklet et al., 2008).
the goal of this paper is to model and classify such speaker attributes from only the latent information found in textual transcripts.
contrasting
train_1218
We reimplemented this model as our reference for gender classification, further details of which are given below: For each conversation side, a training example was created using unigram and bigram features with tf-idf weighting, as done in standard text classification approaches.
stopwords were retained in the feature set as various sociolinguistic studies have shown that the use of some of the stopwords, for instance, pronouns and determiners, is correlated with age and gender.
contrasting
train_1219
The second observation is that, on the restricted subset of word pairs considered, the results obtained by word-to-word translation probabilities are most of the time better than those of concept vector measures.
the differences are not statistically significant.
contrasting
train_1220
Question Answering (QA), which aims to provide answers to human-generated questions automatically, is an important research area in natural language processing (NLP) and much progress has been made on this topic in previous years.
the objective of most state-of-the-art QA systems is to find answers to factual questions, such as "What is the longest river in the United States?"
contrasting
train_1221
Sentences with high scores are then added into the answer set or the summary.
to the best of our knowledge, all previous Markov Random Walk-based sentence ranking models only make use of topic relevance information, i.e.
contrasting
train_1222
While this limits the reliability of syntactic observations, it represents the current state of the art for syntactic analysis of unreconstructed spontaneous speech text.
automatically obtained parses for cleaned reconstructed text are more likely to be accurate given the simplified and more predictable structure of these SUs.
contrasting
train_1223
While OOV is always a problem for most languages in ASR, in the Chinese case the problem can be avoided by utilizing character n-grams and moderate performances can be obtained.
character n-gram has its own limitation and proper addition of new words can increase the ASR performance.
contrasting
train_1224
Of course the paths going through a sub-edge e_i should be definitely more than the paths through the corresponding full-edge e. As a result, P(e_i|A) should usually be greater than P(e|A), as implied by the intuition.
the inter-connectivity between all sub-edges and the proper weights of them are not easy to be handled well.
contrasting
train_1225
Of course the modification of language models led by the addition and deletion of words is hard to quantify and we choose to add and delete as few words as possible, which is just a simple heuristic.
adding fewer words means that longer words are added.
contrasting
train_1226
for spoken document indexing fitted well with the proposed LAICA approach.
there still remain lots to be improved.
contrasting
train_1227
The fundamental principle of TBL is to employ a set of rules to correct the output of a stochastic model.
to traditional rule-based approaches where rules are manually developed, TBL rules are automatically learned from training data.
contrasting
train_1228
In particular, for onehour lectures given by different lecturers (such as, for example, invited presentations), it is often impractical to manually transcribe parts of the lecture that would be useful as training or development data.
transcripts for the first 10-15 minutes of a particular lecture can be easily obtained.
contrasting
train_1229
Hierarchical approaches to machine translation have proven increasingly successful in recent years (Chiang, 2005; Marcu et al., 2006; Shen et al., 2008), and often outperform phrase-based systems (Och and Ney, 2004; Koehn et al., 2003) on target-language fluency and adequacy.
their benefits generally come with high computational costs, particularly when chart parsing, such as CKY, is integrated with language models of high orders (Wu, 1996).
contrasting
train_1230
Regarding the small difference in BLEU scores on MT08, we would like to point out that tuning on MT05 and testing on MT08 had a rather adverse effect with respect to translation length: while the two systems are relatively close in terms of BLEU scores (24.83 and 24.91, respectively), the dependency LM provides a much bigger gain when evaluated with BLEU precision (27.73 vs. 28.79), i.e., by ignoring the brevity penalty.
the difference on MT08 is significant in terms of TER.
contrasting
train_1231
In other cases, the selected hypernyms were too generic words, such as entity or attribute, which also fail to preserve the sentence's meaning.
when the unknown term was a very specific word, hypernyms played an important role.
contrasting
train_1232
The results in Table 7 clearly show that the new model is beneficial.
we want to know how much of the improvement gained is due to the IS asymmetries, and how much the syntactic asymmetries on their own can contribute.
contrasting
train_1233
Humans usually compress sentences by dropping the intermediate nodes in the dependency tree.
the resulting compressions retain both adequacy and fluency.
contrasting
train_1234
The tree trimming approach guarantees that the compressed sentence is grammatical if the source sentence does not trigger parsing error.
as we mentioned in Section 2, the tree trimming approach is not suitable for Japanese sentence compression because in many cases it cannot reproduce human-produced compressions.
contrasting
train_1235
Some researchers then tried to automatically extract paraphrase rules (Lin and Pantel, 2001; Barzilay and Lee, 2003; Zhao et al., 2008b), which facilitates the rule-based PG methods.
it has been shown that the coverage of the paraphrase patterns is not high enough, especially when the used paraphrase patterns are long or complicated (Quirk et al., 2004).
contrasting
train_1236
In the second stage, an NLG system is employed to generate a sentence t from r. s and t are paraphrases as they are both derived from r. The NLG-based methods simulate human paraphrasing behavior, i.e., understanding a sentence and presenting the meaning in another way.
deep analysis of sentences is a big challenge.
contrasting
train_1237
The basic idea is that a translation should be scored based on its similarity to the human references.
they cannot be adopted in SPG.
contrasting
train_1238
Accordingly, all paraphrases for the source units will be extracted as target units.
when a certain application is given, only the source and target units that can achieve the application will be kept.
contrasting
train_1239
More specifically, recall that LIBSVM trains a classifier that by default employs a CT of 0.5, thus classifying an instance as positive if and only if the probability that it belongs to the positive class is at least 0.5.
this may not be the optimal threshold to use as far as performance is concerned, especially for the minority classes, where the class distribution is skewed.
contrasting
train_1240
There exists some work to remove noise from SMS (Choudhury et al., 2007) (Byun et al., 2007) (Aw et al., 2006) (Kobus et al., 2008).
all of these techniques require aligned corpus of SMS and conventional language for training.
contrasting
train_1241
(2007) use queries as a source of knowledge for extracting prominent attributes for semantic concepts.
there has been much work on extracting structured information from larger text segments, such as addresses (Kushmerick 2001), bibliographic citations (McCallum et al.
contrasting
train_1242
Bilingual data (including bilingual sentences and bilingual terms) are critical resources for building many applications, such as machine translation (Brown, 1993) and cross language information retrieval (Nie et al., 1999).
most existing bilingual data sets are (i) not adequate for their intended uses, (ii) not up-to-date, or (iii) applicable only to limited domains.
contrasting
train_1243
Note that those bilingual names do not follow the parenthesis pattern.
most of them are identically formatted as: "{Number}。{English name}{Chinese name}{EndOfLine}".
contrasting
train_1244
Moreover, based on the assumption that anchor texts in different languages referring to the same web page are possibly translations of each other, (Lu et al., 2004) propose a novel approach to construct a multilingual lexicon by making use of web anchor texts and their linking structure.
since only famous web pages may have inner links from other pages in multiple languages, the number of translations that can be obtained with this method is limited.
contrasting
train_1245
According to the dictionary, "Little" can be linked with "小", and "River" can be linked with "河".
"Smoky" is translated as "冒烟 的" in the dictionary which does not match any Chinese characters in the Chinese snippet.
contrasting
train_1246
The matching process is actually quite simple, since we transform the learnt patterns into standard regular expressions and then make use of existing regular expression matching tools (e.g., Microsoft .Net Framework) to extract translation pairs.
to make our patterns more robust, when transforming the selected patterns into standard regular expressions, we allow each character class to match more than once.
contrasting
train_1247
For example, "大提琴与小提琴 双 重协 奏曲 Double Concerto for Violin and Cello D 大调第二交响曲 Symphony No.2 in D Major" is segmented into "大提琴与小提琴双重 协奏曲", "Double Concerto for Violin and Cello D", "大调第二交响曲", and "Symphony No.2 in D Major".
the ending letter "D" of the second segment should have been padded into the third segment.
contrasting
train_1248
The finding that longer dialogues were associated with higher user satisfaction disagrees with the results of many previous PARADISE-style evaluation studies.
it does confirm and extend the results of previous studies specifically addressing interactions between users and embodied agents: as in the previous studies, the users in this case seem to view the agent as a social entity with whom they enjoy having a conversation.
contrasting
train_1249
We hypothesize that the measures introduced in this section have larger power in differentiating different simulated user behaviors since every simulated user action contributes to the comparison between different simulations.
the measures introduced in Section 4.1 and Section 4.2 have less differentiating power since they compare at the corpus level.
contrasting
train_1250
Therefore, we suggest that the handcrafted user simulation is not sufficient to be used in evaluating dialog systems because it does not generate user actions that are as similar to human user actions.
the handcrafted user simulation is still better than a user simulation trained with not enough training data.
contrasting
train_1251
No significant difference is observed among the trained and the handcrafted simulations when comparing their generated corpora on corpus-level dialog features as well as when serving as the training corpora for learning dialog strategies.
the simulation trained from all available human user data can predict human user actions more accurately than the handcrafted simulations, which again perform better than the model trained from half of the human user corpus.
contrasting
train_1252
Therefore, our results suggest that if an expert is available for designing a user simulation when not enough user data is collected, it may be better to handcraft the user simulation than training the simulation from the small amount of human user data.
it is another open research question to answer how much data is enough for training a user simulation, which depends on many factors such as the complexity of the user simulation model.
contrasting
train_1253
Participants in dialogue assume that items they present will be added to the common ground unless there is evidence to the contrary.
participants do not always show acceptance of these items explicitly.
contrasting
train_1254
This group had an average WD score of .199, better than the rest of the group at .268.
skill does not appear to increase smoothly as more dialogues are completed.
contrasting
train_1255
In order to formalize this task as a sequential labeling problem, we have assumed that the label of a character is determined by the local information of the character and its previous label.
this assumption is not ideal for modeling abbreviations.
contrasting
train_1256
We will present a detailed discussion comparing DPLVM and CRF+GI for the English abbreviation generation task in the next subsection, where the difference is more significant.
to a larger extent, the results demonstrate that these two alternative approaches are complementary.
contrasting
train_1257
For A_ITG and A_BITG, we can efficiently sum over the set of ITG derivations in O(n^6) time using the inside-outside algorithm.
for the ITG grammar presented in Section 2.2, each alignment has multiple grammar derivations.
contrasting
train_1258
Exhaustive computation of these quantities requires an O(n^6) dynamic program that is prohibitively slow even on small supervised training sets.
most of the search space can safely be pruned using posterior predictions from a simpler alignment model.
contrasting
train_1259
On the one hand this is expected because multiple occurrences of the same word do increase the confusion for word alignment and reduce the link confidence.
additional information (such as the distance of the word pair, the alignment of neighbor words) could indicate higher likelihood for the alignment link.
contrasting
train_1260
This is similar to the "loose phrases" described in (Ayan and Dorr, 2006a), which increased the number of correct phrase translations and improved the translation quality.
removing incorrect content word links produced cleaner phrase translation tables.
contrasting
train_1261
The networks in Figures 1b and 2b are examples.
as a CN is an integration of the skeleton and all hypotheses, it can be conceived as a list of the component translations.
contrasting
train_1262
The comparison between pairwise and incremental TER methods justifies the superiority of the incremental strategy.
the benefit of incremental TER over pair-wise TER is smaller than that mentioned in Rosti et al.
contrasting
train_1263
Note that RANK requires distinct D_i, so a rank k RANK rule will first apply (optimally) as soon as the kth-best inside derivation item for a given edge is removed from the queue.
it will also still formally apply (suboptimally) for all derivation items dequeued after the kth.
contrasting
train_1264
Delayed Ranked Inside Derivation Deductions (Lazy Version of KA*) using many derivations, each inside edge item will be popped exactly once during parsing, with a score and backpointers representing its 1-best derivation.
k-best lists involve suboptimal derivations.
contrasting
train_1265
The table also indicates that our method considerably outperformed two parsers on NP-COOD, ADJP-COOD, and UCP-COOD categories, but it did not work well on VP-COOD, S-COOD, and SBAR-COOD.
the parsers performed quite well in the latter categories.
contrasting
train_1266
As an example, in the case of discontinuous parsing discussed above, we have f = 2 for most practical cases.
lCFRS productions with a relatively large number of nonterminals are usually observed in real data.
contrasting
train_1267
A more efficient algorithm is presented in (Kuhlmann and , working in time O(|p|) in case of f = 2.
this algorithm works for a restricted typology of productions, and does not cover all cases in which some binarization is possible.
contrasting
train_1268
We conjecture that TT-MCTAG does not have such a closure property.
from a first inspection of the MC-TAG analyses proposed for natural languages (see Chen-Main and Joshi (2007) for an overview), it seems that there are no important natural language phenomena that can be described by LCFRS and not by TT-MCTAG.
contrasting
train_1269
This is because each derivation can have a distinct tcounter.
the definition of TT-MCTAG imposes that the head tree of each tuple contains at least one lexical element.
contrasting
train_1270
For example, when working in the financial domain we may be interested in the employment relation, but when moving to the terrorism domain we now may be interested in the ethnic and ideology affiliation relation, and thus have to create training data for the new relation type.
is the old training data really useless?
contrasting
train_1271
A number of relation extraction kernels have been proposed, including dependency tree kernels (Culotta and Sorensen, 2004), shortest dependency path kernels (Bunescu and Mooney, 2005) and more recently convolution tree kernels (Zhang et al., 2006; Qian et al., 2008).
in both feature-based and kernel-based studies, availability of sufficient labeled training data is always assumed.
contrasting
train_1272
Banko and Etzioni (2008) studied open domain relation extraction, for which they manually identified several common relation patterns.
our method obtains common patterns through statistical learning.
contrasting
train_1273
Our idea of concept pair clustering is a two-step clustering process: first it clusters concept pairs into clusters with good precision using dependency patterns; then it improves the coverage of the clusters using surface patterns.
the standard k-means algorithm is affected by the choice of seeds and the number of clusters k. As we claimed in the Introduction section, because we aim to extract relations from Wikipedia articles in an unsupervised manner, cluster number k is unknown and no good centroids can be predicted.
contrasting
train_1274
Since the clusters are obtained without any labeled data, they may not correspond directly to concepts that are useful for decision making in the problem domain.
the supervised learning algorithms can typically identify useful clusters and assign proper weights to them, effectively adapting the clusters to the domain.
contrasting
train_1275
Unreliable fields are highlighted so that the automatically annotated corpus can be corrected.
aL selection of examples together with partial manual labeling of the selected examples are the main foci of our work.
contrasting
train_1276
For SeSAL with t = 0.99, the delay has no particularly beneficial effect.
in combination with lower thresholds, the delay rates show positive effects as SeSAL yields F-scores closer to the maximal F-score of 87.7%, thus clearly outperforming undelayed SeSAL.
contrasting
train_1277
The most simple but guaranteed way would be to directly perform brute force search for the global optimum over the entire parameter space.
not only the computational cost of this so-called direct search would become undoubtedly expensive as the number of parameters increases, but most retrieval metrics are nonsmooth with respect to model parameters (Metzler, 2007).
contrasting
train_1278
For example, it is well known that the use of a phrase can be effective in retrieval when its constituent words appear very frequently in the collection, because each word would have a very low discriminative power for relevance.
if a constituent word occurs very rarely in the collection, it could not be effective to use the phrase even if the phrase is highly uncompositional.
contrasting
train_1279
In such cases, the knowledge-based model could be more useful, as it can find those query terms in the vocabulary.
the knowledge-based model would have a sparse vocabulary for languages that can have heavily inflected words such as Turkish and Finnish.
contrasting
train_1280
For example, if the search engine's United States web site, which is considered as one of the most important markets in the world, was to employ such an approach, it'd only receive 74.9% accuracy by misclassifying the English queries entered from countries for which the default language is not English.
when this geographical information is used as a feature in our decision tree framework, we get a very high boost on the accuracy of the results for all the languages.
contrasting
train_1281
For example, in query pair (Toyota Camry, ), 9/13 English pages are anchored by the URLs containing keywords "toyota" and/or "camry", and 3/5 constraint documents' URLs also contain them.
the URLs of returned Chinese pages are less regular in general.
contrasting
train_1282
Interestingly, (political cartoons) are among these Chinese queries improved most by English ranking, which is believed as rare (or sensitive) content on Chinese web.
top English queries are short of this type of queries.
contrasting
train_1283
Some connectives are largely unambiguous, such as although and additionally, which are almost always used as discourse connectives and the relations they signal are unambiguously identified as comparison and expansion, respectively.
not all words and phrases that can serve as discourse connectives have these desirable properties.
contrasting
train_1284
Similarly in sentence (2a), once is a discourse connective marking the temporal relation between the clauses "The asbestos fiber, crocidolite is unusually resilient" and "it enters the lungs".
in sentence (2b), once occurs with a non-discourse sense, meaning "formerly" and modifying "used".
contrasting
train_1285
(2004) was adopted in this study.
instead of direct orthographic mapping, we model the mapping between an English segment and the pronunciation in Chinese.
contrasting
train_1286
For the Jlist, similarity between pronunciations accounted for nearly 80% of the errors, and the ratio for the errors that are related to compositions and pronunciations is 1:2.6.
for the Elist, the corresponding ratio is almost 1:1.
contrasting
train_1287
It should be easy to find a lexicon that contains pronunciation information about Chinese characters.
it might not be easy to find visually similar Chinese characters with computational methods.
contrasting
train_1288
With a lexicon, we can find characters that can be pronounced in a particular way.
this is not enough for our goal.
contrasting
train_1289
It is easy to judge whether two Chinese characters have the same tone.
it is not trivial to define "similar" sound.
contrasting
train_1290
By using the adjoining operation, we avoid the problem of infinite local ambiguity.
the adjoining operation cannot preserve lexical dependencies of partial parse trees.
contrasting
train_1291
Under a regular CFG, each parse tree uniquely identifies a derivation.
multiple derivations in a TSG can produce the same parse; obtaining the parse probability requires a summation over all derivations that could have produced it.
contrasting
train_1292
Since the oracle score for CCGbank is less than 95%, it would not be a fair comparison to use the complete test set.
there are a number of sentences which are correct, or almost correct, according to EVALB after the conversion, and we are able to use those for a fair comparison.
contrasting
train_1293
We can retrieve positive examples from Web archive with high precision (but low recall) by manually augmenting queries with hypernyms or semantically related words (e.g., "Loft AND shop" or "Loft AND stationary").
it is often costly to create negative examples.
contrasting
train_1294
It mainly refers to a strip of red carpeting laid down for dignitaries to walk on.
it is possible to encounter instances of "red carpet" referring to any carpet of red colour.
contrasting
train_1295
Because the amount of the resources used in our study is quite different, we cannot directly compare the methods and results.
because our analyzer has scalability that can freely add new features, for our future work, we hope to adopt the case frames as new features and compare their effect.
contrasting
train_1296
The ASR accuracy also indicates the user's habituation.
it has been shown that the user's ASR accuracy and barge-in rate do not improve simultaneously (Komatani et al., 2007).
contrasting
train_1297
This algorithm starts by deleting all the clusters which are in QS from CS so that we only focus on the context clusters whose subtopics are present in the answers.
in some cases this assumption is incorrect.
contrasting
train_1298
In this paper, we designed a set of new classes of features to generate better compressions, and they were found to produce statistically significant improvements over the state-of-the-art.
although the user study demonstrates the expected positive impact of grammatical features, an error analysis (Gupta et al., 2009) reveals some limitations to improvements that can be obtained using grammatical features that refer only to the source sentence structure, since the syntax of the source sentence is frequently not preserved in the gold standard compression.
contrasting
train_1299
Query-focus also aids the automated summarizers in directing the summary at specific topics, which may result in better agreement with these model summaries.
while query focus correlates with performance, we show that highperforming automatic systems produce summaries with disproportionally higher query term density than human summarizers do.
contrasting