id: string, 7–12 characters
sentence1: string, 6–1.27k characters
sentence2: string, 6–926 characters
label: string, 4 classes
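Each record below follows this schema: an id, a pair of sentences, and one of four label classes (only "contrasting" appears in this excerpt). As a minimal sketch of how such records might be represented and filtered, assuming the split has been exported to a JSON-lines file (the file name, record type, and export format here are illustrative assumptions, not part of the original release):

```python
import json
from dataclasses import dataclass
from typing import Iterator

@dataclass
class PairRecord:
    id: str          # e.g. "train_20600"
    sentence1: str   # 6 to ~1.27k characters
    sentence2: str   # 6 to 926 characters
    label: str       # one of 4 classes; only "contrasting" appears in this excerpt

def read_records(path: str) -> Iterator[PairRecord]:
    """Yield one PairRecord per line of a JSON-lines export (hypothetical format)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            yield PairRecord(row["id"], row["sentence1"], row["sentence2"], row["label"])

# Example usage (placeholder file name):
# contrasting_pairs = [r for r in read_records("train.jsonl") if r.label == "contrasting"]
```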
train_20600
The sentences were chosen randomly, so each one is potentially from a different domain.
BNC can be thought of as its own domain in that it contains significant lexical differences from the American English used in our other corpora.
contrasting
train_20601
Prosodic information such as pause length, duration of words and phones, pitch contours, energy contours, and their normalized values have been used in speech processing tasks like sentence boundary detection .
other researchers use linguistic encoding schemes like ToBI (Silverman et al., 1992), which encodes tones, the degree of juncture between words, and prominence symbolically.
contrasting
train_20602
If the sub-tree is constructed from a binary rule rewrite X → Y Z, then the root nonterminal Y of some best sub-tree over some span (i, k) will generate break b_k because Y is the highest nonterminal that covers word w_k as the right-most terminal.
the root nonterminal Z of some best subtree over some span (k+1, j) can not generate break b_j because Z has a higher nonterminal X that also covers word w_j as its right-most terminal.
contrasting
train_20603
The DIRECT bars represent direct parsing results for models trained and evaluated on the original data, ORACLE bars for models trained and evaluated on the modified oracle data (see Subsection 6.2), and the ORACLE-RESCORE and DIRECT-RESCORE bars for results of the two rescoring approaches (described in Subsection 6.3) on the original evaluation data.
the prosodically enriched methods do not significantly improve upon the REGULAR baseline after the introduction of latent annotations.
contrasting
train_20604
Early question-answering (QA) systems, such as Baseball (Green et al., 1961) and Lunar (Woods, 1973) were carefully hand-crafted to answer questions in a limited domain, similar to the QA components of ELIZA (Weizenbaum, 1966) and SHRDLU (Winograd, 1972).
there has been a resurgence of QA systems following the TREC conferences with an emphasis on answering factoid questions.
contrasting
train_20605
A user querying a restaurant recommendation system expects more fine-grained information such as house specials, wine selections and choices on desserts rather than just general 'good food.'
to a 'text' summarization system, the textual space in a dialogue turn is often very limited.
contrasting
train_20606
Extraction of structured information from text is of interest to a large number of communities.
in the ads domain, the task has usually been simplified to that of classification or ranking.
contrasting
train_20607
That way a noun can appear in different vectors, hence in different clusters during hierarchical clustering as a result of its polysemy.
the underlying assumption is that a verb is monosemous with respect to its associated vector of nouns.
contrasting
train_20608
in the case of singer or need) as well as its ability to include synonyms from less frequent senses (e.g., the experiment sense of research or the verify sense of prove).
there are a number of ways it could be improved: Feature representations: Multiple prototypes improve Spearman correlation on WordSim-353 compared to previous methods using the same underlying representation (Agirre et al., 2009).
contrasting
train_20609
However, there are a number of ways it could be improved: Feature representations: Multiple prototypes improve Spearman correlation on WordSim-353 compared to previous methods using the same underlying representation (Agirre et al., 2009).
we have not yet evaluated its performance when using more powerful feature representations such as those based on Latent or Explicit Semantic Analysis (Deerwester et al., 1990;Gabrilovich and Markovitch, 2007).
contrasting
train_20610
We posit that the finer-grained senses actually capture useful aspects of word meaning, leading to better correlation with WordSim-353.
it would be good to compare prototypes learned from supervised sense inventories to prototypes produced by automatic clustering.
contrasting
train_20611
In particular, they infer syntactic correspondences between the source and target languages through word alignment patterns, sometimes in combination with constraints from parser outputs.
word alignments are not perfect indicators of syntactic alignment, and syntactic systems are very sensitive to word alignment behavior.
contrasting
train_20612
In that case, the original article of the writer can be used as a feature for the classifier and the correct article, as judged by a native English speaker, will be viewed as the label.
obtaining annotated data for training is expensive and, since the native training data do not contain errors, we cannot use the writer's article as a feature for the classifier.
contrasting
train_20613
Moreover, training on error-tagged data is currently unrealistic in the majority of error correction scenarios, which suggests that using text with artificial mistakes is the only alternative to using clean data.
it has not been shown whether training on data with artificial errors is beneficial when compared to utilizing clean data.
contrasting
train_20614
Like blogs, conversations on Twitter occur in a public environment, where they can be collected for research purposes.
twitter posts are restricted to be no longer than 140 characters, which keeps interactions chat-like.
contrasting
train_20615
Note that τ is not a perfect measure of model quality for conversations; in some cases, multiple order- ings of the same set of posts may form a perfectly acceptable conversation.
there are often strong constraints on the type of response we might expect to follow a particular dialogue act; for example, answers follow questions.
contrasting
train_20616
In the recent years, analysis of social media has attracted a lot of attention from the research community.
most of the work that uses social media focuses on blogs (Glance et al., 2004;Bansal and Koudas, 2007;Gruhl et al., 2005).
contrasting
train_20617
Different rows of Table 2 correspond to the following ways of ranking the threads: • Entropy + users - if the entropy of a thread is < 3.5, move to the back of the list, otherwise sort according to number of unique users. Results show that ranking according to size of thread performs better than the baseline, and ranking according to the number of users is slightly better.
a sign test showed that neither of the two ranking strategies is significantly better than the baseline.
contrasting
train_20618
The baseline in this case is very strong and neither sorting according to the size of the thread nor according to the number of users outperforms the baseline.
adding the information about entropy significantly (p ≤ 0.01) improves the performance over all other ranking methods.
contrasting
train_20619
It was also discussed how this is related to finding decision boundaries through lowdensity regions of input distribution.
the obvious assumption here is that the classes are well separated and there in fact exist low-density regions between classes which can be treated as boundaries.
contrasting
train_20620
Previous research reported OOV detection accuracy on all test data.
once an OOV word has been observed in the training data for the OOV detector, even if it never appeared in the LVCSR training data, it is no longer truly OOV.
contrasting
train_20621
As a classification algorithm, Maximum Entropy assigns a label to each region independently.
OOV words tend to be recognized as two or more IV words, hence OOV regions tend to co-occur.
contrasting
train_20622
(Villavicencio et al., 2005) identify Machine Translation as an application of particular interest since " recognition of MWEs is necessary for systems to preserve the meaning and produce appropriate translations and avoid the generation of unnatural or nonsensical sentences in the target language."
statistical machine translation (SMT) typically does not model MWEs explicitly.
contrasting
train_20623
Meza-Ruiz and Riedel (2009) has shown that the predicate sense can improve the final SRL performance.
there is little discussion about the concrete influence of all word senses, i.e.
contrasting
train_20624
The facts of Sahaptin word order are also too complex for the customization system; in particular, it cannot model truly free word order (i.e., discontinuous noun phrases), and the attachment behavior of the second-position enclitic is similarly beyond its capability.
given these simplifying assumptions, the customization system is capable of modeling all the agreement and marking patterns of Sahaptin intransitive and transitive clauses shown in Tables 7 and 8 in R&R (1996, 676).
contrasting
train_20625
Monolingual parsing is commonly thought of as a worst-case O(n^3) algorithm, even the known algorithms do have a grammar term that can contribute significantly.
since the grammar that a parser will employ is generally assumed to be fixed, Figure 2 plots the average runtime of the algorithm as a function of the Arabic sentence length on an Arabic-English phrasal ITG alignment task.
contrasting
train_20626
With the arrival of efficient, widecoverage parsers, it is feasible to create very large databases of trees.
existing approaches that use in-memory search, or relational or XML database technologies, do not scale up.
contrasting
train_20627
The length experiment here uses a single term repeated multiple times.
there is a possibility that the results may vary when the terms are different, because it would involve additional time to load the term vectors of distinct elements into memory.
contrasting
train_20628
Unfortunately, it is known by previous results (Rambow and Satta, 1999) that it is not always possible to convert an LCFRS into such a binary form without increasing the fan-out.
we will show that it is always possible to build such a binarization for well-nested LCFRS.
contrasting
train_20629
In the Hallway Testing, the users were better trained and more familiar with IE tools (including the graphical interface of cross-document IE); and thus they can benefit more from the IE techniques.
in the Remote Evaluation, the users had quite diverse knowledge backgrounds.
contrasting
train_20630
(2009) who detect idiom types by using statistical methods that model the general idiomaticity of an expression and then combine this with a simple second-stage process that detects whether the target expression is used figuratively in a given context, based on whether the expression occurs in canonical form or not.
modeling token-based detection as a combination of type-based extraction and tokenbased classification has some drawbacks.
contrasting
train_20631
The answers constructed interactively were submitted to NIST as the final (post-interaction) run.
since these answers were significantly shorter than the initial run (given the short interaction period), the responses were "padded" by running additional iterations of automatic MMR until a length quota of 4000 characters had been achieved.
contrasting
train_20632
2), if sentence A contains all the information in sentence B but not vice versa, then B is also their intersection while A is their union and no sentence generation is required.
if the two sentences are too dissimilar, then no intersection is possible and the union is just the conjunction of the sentences.
contrasting
train_20633
Since topic changes sometimes happen within a single document, and our end task is sentence retrieval, we also investigate the notion of word cooccurrence in a smaller segment of text such as a sentence.
to the document-wise model, sentence-wise co-occurrence does not consider whole documents, and only concerns itself with the number of times that two words occur in the same sentence.
contrasting
train_20634
When a corpus is sense-tagged, mapping occurrences of a word to a concept is straightforward (since each sense of a word corresponds with a concept or synset in WordNet).
if the text has not been sense-tagged then all of the possible senses of a given word are incremented (as are their ancestors).
contrasting
train_20635
The Resnik (res) measure simply uses the Information Content of the LCS as the similarity value: The Resnik measure is considered somewhat coarse, since many different pairs of concepts may share the same LCS.
it is less likely to suffer from zero counts (and resulting undefined values) since in general the LCS of two concepts will not be a very specific concept (i.e., a leaf node in WordNet), but will instead be a somewhat more general concept that is more likely to have observed counts associated with it.
contrasting
train_20636
The one exception is with respect to the WS gold standard, where vector and lesk perform much better than the Information Content measures.
this seems reasonable since they are relatedness measures, and the WS corpus is annotated for relatedness rather than similarity.
contrasting
train_20637
To the machine learning community, treebanks are just collections of data, like pixels with captions, structural and behavioral facts about genes, or observations about wild boar populations.
to us computational linguists, treebanks are not naturally occurring data at all: they are the result of a very complex annotation process.
contrasting
train_20638
Linguistic phrase structure is most conveniently expressed in a phrase structure tree, while linguistic dependency is most conveniently expressed in a dependency tree.
we can express the same content in either type of tree!
contrasting
train_20639
Crowdsourcing is the use of the mass collaboration of Internet passers-by for large enterprises on the World Wide Web such as Wikipedia and survey companies.
a generalized way to monetize the many small tasks that make up a larger task is relatively new.
contrasting
train_20640
The performance of IdentiFinder is quite low on the IT business press.
a simple rule-based system was able to gain 10% improvement in precision with little recall sacrificed.
contrasting
train_20641
Hierarchical phrase-based translation (Hiero, (Chiang, 2005)) provides an attractive framework within which both short- and long-distance reorderings can be addressed consistently and efficiently.
hiero is generally implemented with a constraint preventing the creation of rules with adjacent nonterminals, because such rules introduce computational and modeling challenges.
contrasting
train_20642
On the minus side, SWSD can only be useful for subjectivityambiguous words.
we showed (Su and Markert, 2008) that subjectivity-ambiguity is frequent (around 30% of common words).
contrasting
train_20643
They then incorporate these probabilities into machine translation.
they do not consider sentiment explicitly.
contrasting
train_20644
We tackle cross-lingual lexical substitution as a supervised task, using sets of manual translations for a target word as training data even for baseline system B.
we do not necessarily need dedicated human translated data as we could also use existing parallel texts in which the target word occurs.
contrasting
train_20645
That is, we posit that clickthrough information could provide important evidence for classifying query ambiguity.
we find that previously proposed clickthrough-based measures tend to conflate informational and ambiguous queries.
contrasting
train_20646
For instance, query "ako" was annotated as ambiguous, as it could refer to different popular websites, such as the site for Army Knowledge Online and the company site for A.K.O., Inc.
most users select the result for the Army Knowledge Online site, making the overall entropy low, resulting in prediction as a clear query.
contrasting
train_20647
Simplification can also be considered to be a form of MT in which the two "languages" in question are highly related.
note that ComplexEW and SimpleEW do not together constitute a clean parallel corpus, but rather an extremely noisy comparable corpus.
contrasting
train_20648
There are at least four possible edit operations: .
for this initial work we assume P(o_4) = 0.
contrasting
train_20649
In the field of machine translation, automatic metrics have proven quite valuable in system development for tracking progress and measuring the impact of incremental changes.
human judgment still plays a large role in the context of evaluating MT systems.
contrasting
train_20650
They simply choose the NPs in a product review as the product attribute candidates (Hu and Liu, 2004;Popescu and Etzioni, 2005;Yi et al., 2003).
this method limits the recall of the product attribute extraction for two reasons.
contrasting
train_20651
In particular, for a syntactic structure T in the test set, if T exactly matches with one of the standard syntactic structures, then its corresponding string can be treated as a product attribute candidate.
this method fails to handle similar syntactic structures, such as the two structures in Figure 2.
contrasting
train_20652
This can illustrate that syntactic structures can cover more forms of the product attributes.
the recall of SynStru based method is not high, either.
contrasting
train_20653
At the moment, because of the small size of the data sets and the variety of writing styles in the development set, only tentative conclusions can be drawn.
even this small data set reveals clear problems for WSJ-trained parsers: the handling of long coordinated sentences (particularly in the presence of erratic punctuation usage), domain-specific fixed expressions and unknown words.
contrasting
train_20654
We also learn that new entity nominals are typically indefinite or have SBAR complements (captured by the CFG feature).
to nominals and pronouns, the choice of entity for a proper mention is governed more by entity frequency than antecedent distance.
contrasting
train_20655
On average the unconstrained model that contains more sentence pairs for rule extraction slightly outperforms the bounded condition which uses less data per epoch.
the static baseline and the bounded models both use the same number of sentence-pairs for TM training.
contrasting
train_20656
For presentation clarity we show only a sample of the full set of ten test points though all results follow the pattern that using more aligned sentences to derive our grammar set resulted in slightly better performance versus a restricted training set.
for the same coverage constraints not only do we achieve comparable performance to batch retrained models using the sOEM method of incremental adaptation, we are able to align and adopt new data from the input stream orders of magnitude quicker since we only align the mini-batch of sentences collected from the last epoch.
contrasting
train_20657
Clearly this question is not easy to address.
to get a rough idea we can look at the examples reported in Table 1.
contrasting
train_20658
The CN decoding has been shown to be efficient, just minimally larger than the single string decoding (Bertoldi et al., 2008).
in the current enhanced MT setting, the sequence of Steps 1 to 4 for building the CN from the noisy input text is quite costly.
contrasting
train_20659
Demographically related languages: Hindi and Kannada are languages spoken in the Indian subcontinent, though they are from different language families.
due to the shared culture and demographics, it is easy to create parallel names data between these two languages.
contrasting
train_20660
Task-accuracy results have generally been favorable.
it can be timeconsuming to apply Bayesian inference methods to each new problem.
contrasting
train_20661
Traditional machine learning algorithms are typically designed for a single machine, and designing an efficient training mechanism for analogous algorithms on a computing clusteroften via a map-reduce framework (Dean and Ghemawat, 2004) -is an active area of research (Chu et al., 2007).
unlike many batch learning algorithms that can easily be distributed through the gradient calculation, a distributed training analog for the perceptron is less clear cut.
contrasting
train_20662
This counterexample does not say that a parameter mixing strategy will not converge.
if T is separable, then each of its subsets is separable and converges via Theorem 1.
contrasting
train_20663
For example, it seems reasonable to assume that "brooklyn pizza" and "pizza brooklyn" denote roughly the same user intent.
the pair has an edit distance of two (delete-insert), while the distance between "brooklyn pizza" and the less relevant "brooklyn college" is only one (substitute).
contrasting
train_20664
Some researchers combine features by manual rules or weights.
it is not convenient to directly use these rules or weights in another data set.
contrasting
train_20665
Machine Transliteration has been studied extensively in the context of Machine Translation and Cross-Language Information Retrieval (Knight and Graehl, 1998), (Virga and Khudanpur, 2003), (Kuo et al., 2006), (Sherif and Kondrak, 2007), (Ravi and Knight, 2009), (Li et al., 2009), (Khapra and Bhattacharyya, 2009).
machine Transliteration followed by string similarity search gives a less-than-satisfactory solution for the cross-language name search problem as we will see later in Section 4.
contrasting
train_20666
Intuitively, there should be a relatively small prior probability of introducing a new word-object pair, corresponding to a small α_1 value.
most other words don't refer to the topic object (or any other object for that matter), corresponding to a much larger α_0 value.
contrasting
train_20667
A small concentration parameter biases the estimator to prefer a small set of word types.
the relatively large concentration parameter for the nonreferring words tends to result in most of the words receiving highest probability as non-referring words.
contrasting
train_20668
A segmenting model without meanings cannot share the word learner's reluctance to propose new meaning-bearing word types and might propose three separate types for "your book", "a book", and "the book".
with a small enough prior on new referring word types, the word learner that discovers a common referent for all three sequences and, preferring fewer referring word types, is more likely to discover the common subsequence "book".
contrasting
train_20669
It is expected that a label predicated on requests for action should rely on the isolation of verb stems, but this is still a very substantial gain.
to this 391.2% gain in accuracy for Chichewa, the gain for Indep = language independent, Chich = specific to Chichewa, ( ) = not significant (ρ > 0.05, χ²), Final = Gain of the 'Morph-Config, Indep' model over the Baseline.
contrasting
train_20670
Unfortunately, word-based techniques face problems due to data sparsity: not all words in the test set are seen during training.
consonant-based approaches rarely face the analogous problem of previously unseen consonants.
contrasting
train_20671
(2006), we use a hybrid word- and consonant-level approach based on the following observations (statistics taken from the Syriac training and development sets explained in Section 4): Contrary to observations 1 and 2, consonant-level approaches dedicate modeling capacity to an exponential (in the number of consonants) number of possible diacritizations of a word.
a word-level approach directly models the (few) diacritized forms seen in training.
contrasting
train_20672
In sentences containing no rare words, the well-known Viterbi algorithm can be used to find the optimum.
as can be seen in Figure 1b, predictions in the consonant-level model (e.g., C_{5,1...4}) depend on previously diacritized words (D_4), and some diacritized words (e.g., D_6) depend on diacritics in the previous rare word (C_{5,1...4}).
contrasting
train_20673
Thus, the concept of space is only learnt later on when the person learns how to use a computer.
space is introduced as a tool to control the correct letter shaping and not to consistently separate words.
contrasting
train_20674
As the examples suggest, the reduplication may not only be limited to word initial position and may also occur word medially.
if the length of base word is less than four, it is further to avoid function words (case markers, postpositions, auxiliaries, etc.)
contrasting
train_20675
As has been discussed, space does not necessarily indicate word boundary.
presence of space does imply word or morpheme boundary in many (footnote 6: The word ٰ ‫اعلی‬ is written with the super-script Alef placed on Lam and Yay characters).
contrasting
train_20676
Since the alignment in the HMM-based model is determined by a hidden variable, the EM algorithm is required to estimate the parameters of the model (see (Och and Ney, 2003)).
the standard EM algorithm is not appropriate to incrementally extend our HMM-based models because it is designed to work in batch training scenarios.
contrasting
train_20677
This prevents models from doing better or worse just because they received different starting points.
it is still possible that certain random starting points are better for some evaluation metrics than others.
contrasting
train_20678
Phrases were extracted using the grow heuristic (Koehn et al., 2003).
we threw away all phrases that have a P(e|f) < 0.0001 in order to reduce the size of the phrase table.
contrasting
train_20679
This may be due to the monotone nature of the reference translations and the fact that having multiple references reduces the need for reorderings.
it is possible that differences between training to WER and TER would become more apparent using models that allow for longer distance reorderings or that do a better job of capturing what reorderings are acceptable.
contrasting
train_20680
The Arabic results also trend toward suggesting that BLEU:4 is better than either standard METEOR and METEOR α 0.5.
for the Chinese models, training to standard METEOR and METEOR α 0.5 is about as good as training to BLEU:4.
contrasting
train_20681
Edit distance models tend to do poorly when evaluated on other metrics, as do models trained using METEOR.
training models to METEOR can be made more robust by setting α to 0.5, which balances the importance the metric assigns to precision and recall.
contrasting
train_20682
Our algorithm requires running the IO algorithm for each yield in the variational distribution, for each nonterminal, and for each sentence.
IO runs with much smaller grammars coming from the grammatons.
contrasting
train_20683
At first glance it seems that variational inference is slower than MCMC sampling.
note that the cost of the grammar preprocessing step is amortized over all experiments with the specific grammar, and the E-step with variational inference can be parallelized, while sampling requires an update of a global set of parameters after each tree update.
contrasting
train_20684
For example, the sentence-based sampler samples all the variables associated with a sentence at once (e.g., the entire tag sequence).
this blocking does not deal with the strong type-based coupling (e.g., all instances of a word should be tagged similarly).
contrasting
train_20685
Figure 3 shows examples of same-type sites for our three models.
even if all sites in S have the same type, we still cannot sample b_S jointly, since changing one b_s might change the type of another site s'; indeed, this dependence is reflected in (5), which shows that types depend on z.
contrasting
train_20686
Skip Approximation: Large type blocks mean larger moves.
such a block S is also sampled more frequently, once for every choice of a pivot site s_0 ∈ S. We found that empirically, b_S changes very infrequently.
contrasting
train_20687
All of these methods maintain distributions over (or settings of) the latent variables of the model and update the representation iteratively (see Gao and Johnson (2008) for an overview in the context of POS induction).
these methods are at the core all token-based, since they only update variables in a single example at a time.
contrasting
train_20688
If the paraphrase P passes the current test, in the next iteration it will be tested by taking one more context word into account, namely W 2 3 .
if the paraphrase P fails the current (n, C) check the checking procedure will terminate and report that the paraphrase fails.
contrasting
train_20689
However, if the paraphrase P fails the current (n, C) check the checking procedure will terminate and report that the paraphrase fails.
if the paraphrase passes all the (n, C) checks where C = 1 to maxC, the procedure determines the paraphrase as acceptable.
contrasting
train_20690
If an information unit contains at least one paraphrasable sentence, this information unit implies the embedding of 1.
if none of the sentences in the information unit are paraphrasable, it implies the embedding of 0.
contrasting
train_20691
For example, if {a,b} occurs eight times in the test corpus as <a,b> and two times as <b,a>, we will be limited to a maximum accuracy of 80% (presuming our system correctly predicts the more common ordering).
even though suggesting <b,a> is not strictly incorrect, we generally prefer to reward a system that produces more common orderings, an attribute not emphasized by type-based metrics.
contrasting
train_20692
We predict pairwise orderings with 88% accuracy, so we would expect no worse than (.88)^3, or 68% accuracy on such sequences.
the pairwise accuracy declines on longer NPs, so it underperforms even that theoretical minimum.
contrasting
train_20693
We take a rule-based approach in order to leverage this linguistic knowledge.
since many phenomena pertaining to question generation are not so easily encoded with rules, we include statistical ranking as an integral component.
contrasting
train_20694
This would allow, for example, the addition of a rule to generate why questions that builds off of the existing rules for subject-auxiliary inversion, verb decomposition, etc.
previous QG approaches have employed separate rules for specific sentence types (e.g., Mitkov and Ha, 2003;Gates, 2008).
contrasting
train_20695
In Yarowsky's work, his system requires an initial, manually-supplied collocation as a feature for each sense of a keyword.
we can use GLOSSY's extracted glosses to supply starter features fully automatically, using only an unlabeled corpus.
contrasting
train_20696
If we could predict subject and complements of the word well, supertagging would be an easier job to do.
current widely used sequence labeling models have the limited ability to catch these longdistance syntactic relations.
contrasting
train_20697
Since the baseline parser is different, we didn't make a direct comparison here.
it would be interesting to compare these two different ways of incorporating the dependency parser into HPSG parsing.
contrasting
train_20698
The only approach that outperforms the LAS score of the unweighted voting model is the model that weighs parsers by their accuracy for a given modifier POS tag, but the improvement is marginal.
the number of base parsers in the ensemble pool is crucial: performance generally continues to improve as more base parsers are considered.
contrasting
train_20699
Unlike voting, a metaclassifier can combine evidence from multiple contexts (such as the ones listed in Table 2).
in our experiments such a meta-classifier did not offer any gains over the much simpler unweighted voting strategy.
contrasting