Columns: `id` (string, 7–12 chars), `sentence1` (string, 6–1.27k chars), `sentence2` (string, 6–926 chars), `label` (string, 4 classes).

id | sentence1 | sentence2 | label |
---|---|---|---|
train_20200
|
To test this hypothesis fully would require suitable annotation tools and subjects skilled in CCG annotation, which we do not currently have access to.
|
there is some evidence that annotating category sequences can be done very efficiently.
|
contrasting
|
train_20201
|
Since they somehow use lexical information in the tagged corpus, they are called "lexicalized parsers".
|
unlexicalized parsers achieved an almost equivalent accuracy to such lexicalized parsers (Klein and Manning, 2003).
|
contrasting
|
train_20202
|
In sentence (3) in Figure 3, the baseline method correctly recognized the head of "iin-wa" (commissioner-TM) as "hirakimasu" (open).
|
the proposed method incorrectly judged it as "oujite-imasuga" (offer).
|
contrasting
|
train_20203
|
An early exception to this was (Collins, 1997) itself, where Model 2 used function tags during the training process for heuristics to identify arguments (e.g., the TMP tag on the NP in Figure 1 disqualifies the NP-TMP from being treated as an argument).
|
after this use, the tags are ignored, not included in the models, and absent from the parser output.
|
contrasting
|
train_20204
|
We ported the WordNet similarity path length based measures to the Wikipedia category graph.
|
the category relations in Wikipedia cannot be interpreted only as corresponding to is-a links in a taxonomy, since they denote meronymic relations as well.
|
contrasting
|
train_20205
|
In the NIST pilot study, it was apparent that human annotators often disagreed on whether a belief statement was or was not an opinion.
|
high annotator agreement was seen on judgment opinions.
|
contrasting
|
train_20206
|
We expanded those manually selected seed words of each sentiment class by collecting synonyms from WordNet.
|
we cannot simply assume that all the synonyms of positive words are positive since most words could have synonym relationships with all three sentiment classes.
|
contrasting
|
train_20207
|
Despite successes in identifying opinion expressions and subjective words/phrases (See Section 1), there has been less achievement on the factors closely related to subjectivity and polarity, such as identifying the opinion holder.
|
our research indicates that without this information, it is difficult, if not impossible, to define 'opinion' accurately enough to obtain reasonable interannotator agreement.
|
contrasting
|
train_20208
|
This means that classification modeling 3 can select many candidates as answers as long as they are marked as true, and does not select any candidate if every one is marked as false.
|
ranking always selects the most probable candidate as an answer, which suits our task better.
|
contrasting
|
train_20209
|
In the example, the complete path from Hhead to Ehead is "<H> NP S VP S S VP VBG <E>".
|
representing each complete path as a single feature produces so many different paths with low frequencies that the ME system would learn poorly.
|
contrasting
|
train_20210
|
This is seen in Figure 2 for English, Dutch, Afrikaans, and Italian.
|
a larger rule set does not mean that the average context width is greater.
|
contrasting
|
train_20211
|
Instead, the data shows little sign of saturation (Figure 4).
|
the average perplexity of the letter-to-phoneme distributions remains level with corpus size ( Figure 5).
|
contrasting
|
train_20212
|
The auxiliary has expects a VP-C, permitting the bare verb phrase demonstrate to be incorrectly used.
|
if we tag-annotate all VP-Cs, rule 6 would be relabeled as VP-C_VB in rule 6 and rule 7 as 7 in Figure 5.
|
contrasting
|
train_20213
|
(2003), and component weights are adjusted by minimum error rate training (Och, 2003).
|
to phrase-based SMT and to the above cited dependency-based SMT approaches, our system feeds dependency-structure snippets into a grammar-based generator, and determines target language ordering by applying n-gram and distortion models after grammar-based generation.
|
contrasting
|
train_20214
|
Although generic parameters are useful predictors of user satisfaction in other PARADISE applications, overall our parameters produce less useful user satisfaction models in our system.
|
generic and tutoring-specific parameters do produce useful models of student learning in our system.
|
contrasting
|
train_20215
|
Our results show that, although generic parameters were useful predictors of user satisfaction in other PARADISE applications, overall our parameters produce less useful user satisfaction models in our tutoring system.
|
generic and tutoring-specific parameters do produce useful models of student learning in our system.
|
contrasting
|
train_20216
|
The state with the plusses is the positive final state, and the one at the bottom is the negative final state.
|
we are most concerned with how the non-final states converge.
|
contrasting
|
train_20217
|
To compare the utility of each of the features, we use three metrics. Concept Repetition has the largest number of differences: 10, followed by Frustration, and then Percent Correctness.
|
counting the number of differences does not completely describe the effect of the feature on the policy.
|
contrasting
|
train_20218
|
This means that Percent Correctness effects a smaller amount of change than this random baseline and thus is fairly useless as a feature to add since the random feature is probably capturing some aspect of the data that is more useful.
|
the Concept Repetition and Frustration cause more change in the policies than the random feature baseline so one can view them as fairly useful still.
|
contrasting
|
train_20219
|
For example, back channels usually have shorter sentences and are constant in discourse pattern over a DA.
|
questions and statements typically have longer, and more complex, discourse structures.
|
contrasting
|
train_20220
|
(2005) introduce additional chunking features to enhance the parse tree features.
|
the hierarchical structured information in the parse trees is not well preserved in their parse treerelated features.
|
contrasting
|
train_20221
|
State of the art extraction algorithms may be able to detect the son and sibling relations from local language clues.
|
the cousin relation is only implied by the text and requires additional knowledge to be extracted.
|
contrasting
|
train_20222
|
Our work restricts induced features to conjunctions of base features, rather than using first-order clauses.
|
the patterns we learn are based on information extracted from natural language.
|
contrasting
|
train_20223
|
In this case, we cluster articles by using their basic patterns as features.
|
each basic pattern is still connected to its entity so that we can extract the name from it.
|
contrasting
|
train_20224
|
When a link is found between two basic clusters that were already assigned to a metacluster, we try to put them into all the existing metaclusters it belongs to.
|
we allow a basic cluster to be added only if it can fill all the columns in that table.
|
contrasting
|
train_20225
|
From these results, we can see the large efficiency benefit of the Markov assumption, as the size of the non-terminal and production sets shrink.
|
the efficiency gains come at a cost, with the Markov order-0 factored grammar resulting in a loss of a full 8 percentage points of F-measure accuracy.
|
contrasting
|
train_20226
|
One way out of this dilemma could be to ignore the detailed morphological structure of the word and focus on determining only the major and minor parts of speech.
|
(Oflazer et al., 1999) observes that the modifier words in Turkish can have dependencies to any one of the inflectional groups of a derived word.
|
contrasting
|
train_20227
|
If this number is large, direct computation of (5) on the corpus might be more efficient.
|
if the corpus at hand is very large, one might opt for direct computation of (3).
|
contrasting
|
train_20228
|
Furthermore, the entries share the same values for the attributes YDS and TD (i.e., 237 and 1).
|
clusters 5 and 6 have no attributes in common.
|
contrasting
|
train_20229
|
This is achieved by limiting the number of entry pairs with positive labels for each document: Σ x_{e_i, e_j} ≤ m (5). Notice that the number m is not known in advance.
|
we can estimate this parameter from our development data by considering documents of similar size (as measured by the number of corresponding entry pairs.)
|
contrasting
|
train_20230
|
The task of summarizing spontaneous spoken dialogue from meetings presents many challenges: information is sparse; speech is disfluent and fragmented; automatic speech recognition is imperfect.
|
there are numerous speech-specific characteristics to be explored and taken advantage of.
|
contrasting
|
train_20231
|
Combining the rankings of two such systems might create a third system which is comparable but not any better than either of the first two systems alone.
|
it is still possible that the combined system will be better in terms of balancing the two types of importance discussed above: utterances that contain a lot of informative content and keywords and utterances that relate to decision-making and meeting structure.
|
contrasting
|
train_20232
|
The solution, we believe, is to solicit judgments from multiple assessors and develop a more refined sense of nugget importance.
|
given finite resources, it is important to balance the amount of additional manual effort required with the gains derived from those efforts.
|
contrasting
|
train_20233
|
In Bradshaw's experiment, scientific documents are indexed by the text that refers to them in documents that cite them.
|
unlike in experiments with previous collections, we need both the citing and the cited article as full documents in our collection.
|
contrasting
|
train_20234
|
The majority were criticisms of the test collection paradigm itself and are not pertinent here.
|
the source-document principle (i.e., the use of queries created from documents in the collection) attracted particular criticisms.
|
contrasting
|
train_20235
|
Our average number of judged relevant documents per query is lower than for Cranfield, which had an average of 7.2 (Spärck Jones et al., 2000).
|
this is the final number for the Cranfield collection, arrived at after the second stage of relevance judging, which we have not yet carried out.
|
contrasting
|
train_20236
|
The "global MTF" has been shown to slightly outperform the local version in the aforementioned paper.
|
we believe that the global mode is merely for demonstration and is unlikely to be practical for online judgment, since it insists that all queries are judged simultaneously with strict synchronisation among all assessors.
|
contrasting
|
train_20237
|
We now discuss all these in detail.
|
given a document d, the simplest way to estimate the document language model is to treat the document as a sample from the underlying multinomial word distribution and use the maximum likelihood estimator: P(w|Θ_d) = c(w,d)/|d|, where c(w,d) is the count of word w in document d, and |d| is the length of d. As discussed in virtually all the existing work on using language models for retrieval, such an estimate is problematic and inaccurate; indeed, it would assign zero probability to any word not present in document d, causing problems in scoring a document with query likelihood or KL-divergence (Zhai and Lafferty, 2001b).
|
contrasting
|
train_20238
|
In the worst case, our runtime may be exponential in the number of constraints, since we are considering an intractable class of problems.
|
we show that in practice, the method is quite effective at rapid decoding under global hard constraints.
|
contrasting
|
train_20239
|
If not, then we reintroduce the constraints.
|
rather than include all at once, we introduce them only as they are violated by successive solutions to the relaxed problems: y * 0 , y * 1 , etc.
|
contrasting
|
train_20240
|
Recent work on semantic role labeling (SRL) has focused almost exclusively on the analysis of the predicate-argument structure of verbs, largely due to the lack of human-annotated resources for other types of predicates that can serve as training and test data for the semantic role labeling systems.
|
it is wellknown that verbs are not the only type of predicates that can take arguments.
|
contrasting
|
train_20241
|
We speculate that the lack of improvement is due to the fact that the constraint that core (numbered) arguments should not have the same semantic role label for Chinese nominalized predicates is not as rigid as it is for English verbs.
|
further error analysis is needed to substantiate this speculation.
|
contrasting
|
train_20242
|
Indeed, attempts have been made to directly apply machine translation systems to the problem of semantic parsing (Papineni et al., 1997;Macherey et al., 2001).
|
these systems make no use of the MRL grammar, thus allocating probability mass to MR translations that are not even syntactically well-formed.
|
contrasting
|
train_20243
|
Human efforts are preferred if the evaluation task is easily conducted and managed, and does not need to be performed repeatedly.
|
when resources are limited, automated evaluation methods become more desirable.
|
contrasting
|
train_20244
|
We chose English-Chinese MT parallel data because they are news-oriented which coincides with the task genre from DUC.
|
it is unknown how large a parallel corpus is sufficient in providing a paraphrase collection good enough to help the evaluation process.
|
contrasting
|
train_20245
|
While we experimented with several parameter settings for LSA and Brown methods, we do not claim that the selected settings are necessarily optimal.
|
these methods present sensible com- parison points for understanding the relationship between paraphrase quality and its impact on automatic evaluation.
|
contrasting
|
train_20246
|
The algorithm here does some of that with the steps that normalize the strings.
|
the largest boost in performance is with CEQ, which expands the number of allowable cross-language matches for many characters.
|
contrasting
|
train_20247
|
Our ¡ -step random walk approach is similar to the one proposed by Harel and Koren (2001).
|
their algorithm is proposed for "spatial data" where the nodes of the graph are connected by undirected links that are determined by a (symmetric) similarity function.
|
contrasting
|
train_20248
|
The highest result (shown boldface) for each algorithm and corpus was achieved by using generation vectors.
|
unlike in the k-means experiments, ¡ ¢ £ ¤ ¥ ¢ was able to outperform I Q and I R in one or two cases.
|
contrasting
|
train_20249
|
The recognition approach taken for Turkish involves a static decoding network construction and optimization resulting in near real time decoding.
|
the memory requirements of network optimization becomes prohibitive for large lexicon and language models as presented in this paper.
|
contrasting
|
train_20250
|
Measures using WordNet taxonomy are state-ofthe-art in capturing semantic similarity, attaining r=.85 -.89 correlations with the MC dataset (Jiang and Conrath, 1997;Budanitsky and Hirst, 2006).
|
they fall short of measuring relatedness, as, operating within a single-POS taxonomy, they cannot meaningfully compare kill to death.
|
contrasting
|
train_20251
|
In a previous study (Chatain et al., 2006), linguistic model (LiM) adaptation using different types of word models has proved useful in order to improve summary quality.
|
sparsity of the data available for adaptation makes it difficult to obtain reliable estimates of word n-gram probabilities.
|
contrasting
|
train_20252
|
With smaller labeled dataset, the gap between LP and SVM is larger.
|
LP with JS divergence consistently outperforms LP with cosine similarity.
|
contrasting
|
train_20253
|
Over 712 questions, it replaced 14, two of which improved performance, the rest stayed the same.
|
random selection of paraphrases decreased performance to 0.156, clearly showing the importance of selecting a good paraphrase.
|
contrasting
|
train_20254
|
We believe that in face-to-face discourse, it is important to consider the possibility that non-verbal communication may offer features that are critical to language understanding.
|
due to the long-standing emphasis on text datasets, there has been relatively little work on non-textual features in unconstrained natural language (prosody being the most notable exception).
|
contrasting
|
train_20255
|
Instead, it can be described as a series of patterns at various levels of regularity.
|
compositionality is not a necessary assumption: finite-state models are well-suited for representing mappings from strings of meaning elements to strings of form elements without necessarily pairing them one-to-one.
|
contrasting
|
train_20256
|
A drawback of most finite-state models is their inability to generalize to novel items the way a human could.
|
the output of our finite-state model could potentially be used to generate training sets for connectionist or statistical models.
|
contrasting
|
train_20257
|
For example, a decision tree or SVM that builds a 3-way superclassifier using the posterior probabilities from the HMM and Maxent.
|
so far we have not found any gain from more complicated system combination than a simple linear interpolation.
|
contrasting
|
train_20258
|
al., 2003), which rely upon acoustic/prosodic cues.
|
none of these efforts allow for the context-dependence of extractive summarization, such that the inclusion of one word or sentence in a summary depends upon prior selection decisions.
|
contrasting
|
train_20259
|
A significant exception is the work of Conroy and O'Leary (2001), which employs an HMM model with pivoted QR decomposition for text summarization.
|
the structure of their model is constrained by identifying a fixed number of 'lead' sentences to be extracted for a summary.
|
contrasting
|
train_20260
|
One way to manipulate an extractor's precisionrecall tradeoff is to assign a confidence score to each extracted entity and then apply a global threshold to confidence level.
|
confidence thresholding of this sort cannot increase recall.
|
contrasting
|
train_20261
|
Accordingly, it is not possible to draw a straightforward quantitative comparison between our PropBank SSN parser and other PropBank parsers.
|
state-of-theart semantic role labelling systems (CoNLL, 2005) use parse trees output by state-of-the-art parsers (Collins, 1999;Charniak, 2000), both for training and testing, and return partial trees annotated with semantic role labels.
|
contrasting
|
train_20262
|
We are now building the environment for AAC users with cooperation with ISAAC-ISRAEL 2 , in order to make the system fully accessible and to be tested by AAC-users.
|
this work is still in progress.
|
contrasting
|
train_20263
|
While videos contain a rich source of audiovisual information, text-based video search is still among the most effective and widely used approaches.
|
the quality of such text-based video search engines still lags behind the quality of those that search textual information like web pages.
|
contrasting
|
train_20264
|
In both corpora, we find positive priming effects.
|
PP priming is stronger, and CP priming is much stronger in Map Task.
|
contrasting
|
train_20265
|
In the English and Mandarin systems, the lexical and acoustic feature sets perform similarly, and combine to yield improved results.
|
on the Arabic data, the acoustic feature set performs quite poorly, suggesting that the use of vocal cues to topic transitions may be fundamentally different in Arabic.
|
contrasting
|
train_20266
|
Weights for different types of constituents from each parser can be set in a similar way to configuration 3 in the dependency experiments.
|
instead of measuring accuracy for each part-of-speech tag of dependents, we measure precision for each non-terminal label.
|
contrasting
|
train_20267
|
The choice of using maximum likelihood estimation for estimating the intermediate language models for W(j) is motivated by the simplification in the entropy calculation, which reduces the order from O(V) to O(k).
|
maximum likelihood estimation of language models is poor when compared to smoothing based estimation.
|
contrasting
|
train_20268
|
Recently, the Web has been used as a corpus in the NLP community, where mainly counts of hit pages have been exploited (Kilgarriff and Grefenstette, 2003).
|
our proposal, Web-Based Language Modeling (Sarikaya, 2005), and Bootstrapping Large Sense-Tagged corpora (Mihalcea, 2002) use the content within the hit pages.
|
contrasting
|
train_20269
|
Our proposal is similar to previous studies in that both use machine learning.
|
previous methods used expensive resources, e.g., a corpus in which words are manually tagged according to their pronunciation.
|
contrasting
|
train_20270
|
The live system assigns non-default categories with 86.5% precision; a revised algorithm achieved 93.0% precision, both based on an evaluation of 982 topics.
|
our precision on identifying unambiguous topics with DMOZ was only 83%.
|
contrasting
|
train_20271
|
Note that we use the same notation to denote a matrix and its normalized matrix.
|
the affinity weight between two sentences in the affinity graph is currently computed simply based on their own content similarity, ignoring the affinity diffusion process on the graph.
|
contrasting
|
train_20272
|
Since the dictionary-based approach is a well-known method, we skip its technical descriptions.
|
keep in mind that the dictionary-based approach can produce a higher R-iv rate.
|
contrasting
|
train_20273
|
This is a simple method and the quantization resolution can be adjusted based on the amount of data available for training.
|
the model does not perform as well when combined with the syntactic features.
|
contrasting
|
train_20274
|
Often, unwritten rules based on factors like social roles, personal assertiveness, and the current locus of control play a part in determining who will give away."
|
Haller and Fossum did not further investigate how conversants efficiently resolve conflicts of dialogue initiative.
|
contrasting
|
train_20275
|
The urgency level of 10 seconds requires conversants to start the interruption game very quickly in order to complete it in time.
|
the urgency level of 40 seconds allows conversants ample time to wait for the best time to start the game (Heeman et al., 2005).
|
contrasting
|
train_20276
|
Moreover, if the initiator had an advantage, we would expect the system to have fought more strongly in the user-initiated segments in order to win.
|
we do not see that the relative volume of the system winning in user-initiated segments is statistically higher than in system-initiated segments in this small sample size (p = 0.9, ttest).
|
contrasting
|
train_20277
|
Automatically extracting these argument models is a challenging task.
|
researchers have begun to make progress towards this goal.
|
contrasting
|
train_20278
|
Usually, about 300 or more sentences are used to automatically rank MT systems (Koehn, 2004).
|
the quality of a sentence translated by an MT system is difficult to evaluate.
|
contrasting
|
train_20279
|
This is because conventional methods are based on the similarity between a translated sentence and its reference translation, and they give the translated sentence a high score when the two sentences are globally similar to each other in terms of lexical overlap.
|
in the case of the above example, the most important thing to maintain a high translation quality is to correctly translate "for" into the target language, and it would be difficult to detect the importance just by comparing an MT result and its reference translations even if the number of reference translations is increased.
|
contrasting
|
train_20280
|
These results indicate that we can reduce the development cost for constructing sub-goals.
|
there are still significant gaps between the correlation coefficients obtained using a fully automatic method and upper bounds.
|
contrasting
|
train_20281
|
Most stateof-the-art SMT systems treat grammatical elements in exactly the same way as content words, and rely on general-purpose phrasal translations and target language models to generate these elements (e.g., Och and Ney, 2002;Koehn et al., 2003;Quirk et al., 2005;Chiang, 2005;Galley et al., 2006).
|
since these grammatical elements in the target language often correspond to long-range dependencies and/or do not have any words corresponding in the source, they may be difficult to model, and the output of an SMT system is often ungrammatical.
|
contrasting
|
train_20282
|
The SMT system, trained on this domain, produces a natural lexical translation for the English word patch as correction program, and translates replace into passive voice, which is more appropriate in Japanese.
|
there is a problem in the case marker assignment: the accusative marker wo, which was output by the SMT system, is completely inappropriate when the main verb is passive.
|
contrasting
|
train_20283
|
These results show that the strategy of only including the new information as features in a standard n-best re-ranking scenario does not lead to an improvement over the baseline.
|
method 2 obtains notable improvements over the baseline.
|
contrasting
|
train_20284
|
While recent phrase-based statistical machine translation (SMT) systems achieve significant improvement over the original source-channel statistical translation models, they 1) use a large inventory of blocks which have significant overlap and 2) limit the use of training to just a few parameters (on the order of ten).
|
we show that our proposed minimalist system (DTM2) achieves equal or better performance by 1) recasting the translation problem in the traditional statistical modeling approach using blocks with no overlap and 2) relying on training most system parameters (on the order of millions or larger).
|
contrasting
|
train_20285
|
In the above text, the "she" in the last sentence is coreferent with both mentions of "Elizabeth".
|
when we consider "she" and "Elizabeth (1) " in isolation from the remaining coreference chain, it can be difficult for a machine learning method to determine whether the pair is coreferent or not.
|
contrasting
|
train_20286
|
A simple approach is to perform the transitive closure of the pairwise decisions.
|
as shown in recent work (McCallum and Wellner, 2003;Singla and Domingos, 2005), better performance can be obtained by performing relational inference to directly consider the dependence among a set of predictions.
|
contrasting
|
train_20287
|
Also, we attribute the gains from error-driven training to the fact that training examples are generated based on errors made on the training data.
|
(we should note that there are also small differences in the feature sets used for error-driven and standard training results.)
|
contrasting
|
train_20288
|
For such queries, typical field matching would retrieve no documents at all.
|
the SRM approach achieves a mean average precision of over twenty percent.
|
contrasting
|
train_20289
|
This task is not a typical IR task because the fielded structure of the query is a critical aspect of the processing, not one that is largely ignored in favor of pure content based retrieval.
|
the approach used is different from most DB work because cross-field dependencies are a key component of the technique.
|
contrasting
|
train_20290
|
It is easy to verify that the probability will be non-zero only if some training record w actually contained these words in their respective fields -an unlikely event.
|
the probability of 'elementary' and 'differential' co-occurring in the same title might be considerably higher.
|
contrasting
|
train_20291
|
Items at group centers have higher probabilities, and tighter groups have overall higher probabilities.
|
the stationary distribution does not address diversity at all.
|
contrasting
|
train_20292
|
In contrast, GRASSHOPPER does not involve clustering.
|
it is still able to automatically take advantage of cluster structures in the data.
|
contrasting
|
train_20293
|
This involves inverting an (n − |G|) × (n − |G|) matrix, which is expensive.
|
the Q matrix is reduced by one row and one column in every iteration, but is otherwise unchanged.
|
contrasting
|
train_20294
|
A node's prominence comes from its intrinsic stature, as well as the prominence of the nodes it touches.
|
to ensure that the topranked nodes are representative of the larger graph structure, it is important to make sure the results are not dominated by a small group of highly prominent nodes who are closely linked to one another.
|
contrasting
|
train_20295
|
We seek an actor ranking such that the top actors are prominent.
|
we also want the top actors to be diverse, so they represent comedians from around the world.
|
contrasting
|
train_20296
|
The initial high coverage comes from the random selection of actors.
|
these randomly selected actors are often not prominent, as we show next.
|
contrasting
|
train_20297
|
They show that a 2-parameter Markov process gives rise to a stationary distribution that exhibits the word frequency distribution and the letter frequency distribution characteristics of natural language.
|
the Markov process is initialized such that any state has exactly two successor states, which means that after each word, only two other following words are possible.
|
contrasting
|
train_20298
|
The deviation in single letter words can be attributed to the writing system being a transcription of phonemes and few phonemes being expressed with only one letter.
|
the slight quantitative differences do not oppose the similar distribution of word lengths in both samples, which is reflected in a curve of similar shape in figure 6 and fits well the gamma distribution variant of (Sigurd et al., 2004).
|
contrasting
|
train_20299
|
(2005) explored a large set of features that are potentially useful for relation extraction.
|
the feature space was defined and explored in a somewhat ad hoc manner.
|
contrasting
|
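Each row above is a simple four-field record, so working with a sample of this data needs nothing beyond plain Python. A minimal sketch (the `Row` dataclass is an illustrative assumption, not part of any dataset release; the two sample rows are copied verbatim from the table):

```python
from dataclasses import dataclass

@dataclass
class Row:
    id: str
    sentence1: str
    sentence2: str
    label: str

# Two sample rows copied from the table above.
rows = [
    Row(
        id="train_20200",
        sentence1=(
            "To test this hypothesis fully would require suitable annotation "
            "tools and subjects skilled in CCG annotation, which we do not "
            "currently have access to."
        ),
        sentence2=(
            "there is some evidence that annotating category sequences can "
            "be done very efficiently."
        ),
        label="contrasting",
    ),
    Row(
        id="train_20277",
        sentence1="Automatically extracting these argument models is a challenging task.",
        sentence2="researchers have begun to make progress towards this goal.",
        label="contrasting",
    ),
]

# Filter by label; every row shown in this preview carries "contrasting".
contrasting = [r for r in rows if r.label == "contrasting"]
print(len(contrasting))  # → 2
```

Since `label` has 4 classes overall but only `contrasting` appears in this slice, the same filter generalizes to the other classes once the full split is loaded.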