column     type           values
id         stringlengths  7 to 12
sentence1  stringlengths  6 to 1.27k
sentence2  stringlengths  6 to 926
label      stringclasses  4 values
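The schema above describes a simple four-field record structure. As a minimal sketch of how such records might be loaded and inspected with the Hugging Face datasets library (assuming the pairs are stored as JSON Lines with exactly these fields; the file path below is a placeholder, not the actual location of this data):

from datasets import load_dataset

# Hypothetical path; the real storage location/format of this dataset is not given here.
ds = load_dataset("json", data_files={"train": "contrast_pairs.jsonl"})["train"]

for record in ds.select(range(3)):
    # Each record pairs two sentences; `label` (4 classes, e.g. "contrasting")
    # names the discourse relation between sentence1 and sentence2.
    print(record["id"], record["label"])
    print("  s1:", record["sentence1"][:80])
    print("  s2:", record["sentence2"][:80])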
train_20500
For non-sparse models, there is significant variation in average discounts, but because $\frac{1}{D}\sum_{i=1}^{F}|\lambda_i|$ is low, the overall error is low.
sparse models are dominated by single-count n-grams with features whose average discount is quite close to γ = 0.938.
contrasting
train_20501
These techniques include the hold-out method; leave-one-out and k-fold cross-validation; and bootstrapping (Allen, 1974; Stone, 1974; Geisser, 1975; Craven and Wahba, 1979; Efron, 1983).
unlike non-data-splitting methods, these methods do not lend themselves well to providing insight into model design as discussed in Section 6.
contrasting
train_20502
In ML, we maximize the probability of a given sequence of observations O, belonging to a given class, given the HMM λ of the class, with respect to the parameters of the model λ.
this probability is the total likelihood of the observations and can be expressed mathematically as $L_{tot} = P(O \mid \lambda)$; there is no known way to analytically solve for the model $\lambda = \{A, B, \pi\}$ that maximizes the quantity $L_{tot}$, where $A$ is the transition probabilities, $B$ is the observation probabilities, and $\pi$ is the initial state distribution.
contrasting
train_20503
The basic idea of the proposed modeling is to create a separate model for each word of the language and use the language model corpus to estimate the parameters of the model.
one could argue that the basic model could be improved by taking the contexts of the morphemes into account.
contrasting
train_20504
For a vocabulary size of V, the number of tri-morphemes could be as high as $V^3$.
most of the tri-morphemes are either rare or not observed in the training data at all.
contrasting
train_20505
Most of the probabilities lie along the diagonal line.
some trigram probabilities are modulated, making TMLM probabilities quite different from the corresponding trigram probabilities.
contrasting
train_20506
This improvement is not statistically significant.
interpolating TMLM with Word-3gr improves the WER to 31.9%, which is 1.0% better than that of the Word-3gr.
contrasting
train_20507
As the distribution this model describes does not change, neither will its training performance.
the (unscaled) size $\sum_{i=1}^{F}|\lambda_i|$ of the model has been reduced from 3+4=7 to 0+1+3=4, and consequently by eq.
contrasting
train_20508
This algorithm has working storage that it can use to store parts of the input or other intermediate computations.
(and this is a critical constraint), this working storage space is significantly smaller than the input stream length.
contrasting
train_20509
This number is not significantly worse than the 36% of stream counts obtained from 4,018k true counts for the smallest value of ε = 5e-8.
if we look at the other two metrics, the ranking correlation ρ of stream counts compared with true counts on ε = 50e-8 and 20e-8 is low compared to other ε values.
contrasting
train_20510
Performance starts to degrade as we get to 2,000k (over 50% of all 5-grams), a result that is not too surprising.
even here we note that the MSE is low, suggesting that the frequencies of stream counts (found in top K true counts) are very close to the true counts.
contrasting
train_20511
As expected, the recall with respect to true counts is maximum for unigrams, bigrams, trigrams and 5-grams.
the amount of space required to store all true counts in comparison to stream counts is extremely high: we need 4.8GB of compressed space to store all the true counts for 5-grams.
contrasting
train_20512
For 5-grams, the best recall value is .020 (1.2k out of 60k 5-gram stream counts are found in the test set).
compared with the true counts we only lose 0.05 recall points (4.3k out of 60k) but gain memory savings of 150 times.
contrasting
train_20513
(2007) reported that translation quality continued to improve with increasing corpus size for training language models even at a size of 2 trillion tokens, the increase became small at corpus sizes larger than 30 billion tokens.
for more complex NLP tasks, such as case structure analysis and zero anaphora resolution, it is necessary to obtain more structured knowledge, such as semantic case frames, which describe the cases each predicate has and the types of nouns that can fill a case slot.
contrasting
train_20514
For example, Kawahara and Kurohashi proposed a method for constructing wide-coverage case frames from large corpora (Kawahara and Kurohashi, 2006b), and a model for syntactic and case structure analysis of Japanese that is based upon case frames (Kawahara and Kurohashi, 2006a).
they did not demonstrate whether the coverage of case frames was wide enough for these tasks and how dependent the performance of the model was on the corpus size for case frame construction.
contrasting
train_20515
(Nakov and Hearst, 2005; Gledson and Keane, 2008)).
search engines are not designed for NLP research and the reported hit counts are subject to uncontrolled variations and approximations.
contrasting
train_20516
In English, overt pronouns such as she and definite noun phrases such as the company are anaphors that refer to preceding entities (antecedents).
in Japanese, anaphors are often omitted; these omissions are called zero pronouns.
contrasting
train_20517
$P(n_j \mid CF_l, s_j, A'(s_j) = 1)$ is similar to $P(n_j \mid CF_l, s_j, A(s_j) = 1)$ and estimated from the frequencies of case slot examples in case frames.
while $A'(s_j) = 1$ means $s_j$ is not filled with an overt argument but is filled with an antecedent of a zero pronoun, case frames are constructed from overt predicate-argument pairs.
contrasting
train_20518
For nouns with copula, the coverage was only 54.5%.
most predicate-argument relations concerning nouns with copula were easily recognized from syntactic preference, and thus the low coverage would not much affect the performance of discourse analysis.
contrasting
train_20519
We considered that generalized examples can benefit the accuracy of syntactic analysis, and tried several models that utilize these examples.
we could not confirm any improvement.
contrasting
train_20520
In the worst case, the fan-out of $p_2$ can be as large as $\varphi(B_{r-1}) + \varphi(B_r)$.
[Pseudocode residue from Function NAIVE-BINARIZATION(p) omitted.] We have defined reductions only for the last two occurrences of nonterminals in the right-hand side of a production p. it is easy to see that we can also define the concept for two arbitrary (not necessarily adjacent) occurrences of nonterminals, at the cost of making the notation more complicated.
contrasting
train_20521
Note that we have expressed the algorithm as a decision function that will return true if there exists a binarization of p with fan-out not greater than f , and false otherwise.
the algorithm can easily be modified to return a reduction producing such a binarization, by adding to each endpoint set ∆ ∈ workingSet two pointers to the adjacent endpoint sets that were used to obtain it.
contrasting
train_20522
This representation has size certainly smaller than 2f × q, where q is the size of the set workingSet.
both membership and insertion operations now take time O(2f).
contrasting
train_20523
This will result in a more efficiently parsable LCFRS, since rank exponentially affects parsing complexity.
we must take into account that parsing complexity is also influenced by fan-out.
contrasting
train_20524
Even in the restricted case of f = ϕ(p), that is, when no increase in the fan-out of the input production is allowed, we do not know whether p can be binarized using only deterministic polynomial time in the value of p's fan-out.
our bounded binarization algorithm shows that the latter problem can be solved in polynomial time when the fan-out of the input LCFRS is bounded by some constant.
contrasting
train_20525
Many successful models of syntax are based on Probabilistic Context Free Grammars (PCFGs) (e.g., Collins (1999)).
directly learning a PCFG from a treebank results in poor parsing performance, due largely to the unrealistic independence assumptions imposed by the context-free assumption.
contrasting
train_20526
(2003)), a method which uses as productions all subtrees of the training corpus.
many of the DOP estimation methods have serious shortcomings (Johnson, 2002), namely inconsistency for DOP1 (Bod, 2003) and overfitting of the maximum likelihood estimate (Prescher et al., 2004).
contrasting
train_20527
This far surpasses the ML-PCFG (F1 of 70.7%), and is similar to Zuidema's (2007) DOP result of 83.8%.
it is still well below state-of-the-art parsers (e.g., the Berkeley parser trained using the same data representation scores 87.7%).
contrasting
train_20528
It is linguistically plausible that such structures are determined at least in part on the basis of the meaning of the related chunks of texts, and of the rhetorical intentions of their authors.
such knowledge is extremely difficult to capture.
contrasting
train_20529
al., 2008) focus primarily on news articles.
for us the development of the discourse parser is parasitic on our ultimate goal: developing resources and algorithms for language interfaces to instructional applications.
contrasting
train_20530
In many applications with just two classes, this is sufficient.
we are faced with a multi-classification problem.
contrasting
train_20531
Informally, our goal is to maximize each object's net happiness, which is computed by subtracting its membership score of the class it is not assigned to from its membership score of the class it is assigned to.
at the same time, we want to avoid assigning similar objects to different classes.
contrasting
train_20532
Citation texts have also been used to create summaries of single scientific articles in Qazvinian and Radev (2008) and Mei and Zhai (2008).
there is no previous work that uses the text of the citations to produce a multi-document survey of scientific articles.
contrasting
train_20533
This is similar to the concept of prestige in social networks, where the prestige of a person is dependent on the prestige of the people he/she knows.
since a random walk may get caught in cycles or in disconnected components, we reserve a low probability to jump to random nodes instead of neighbors (a technique suggested by Langville and Meyer (2006)).
contrasting
train_20534
Our area includes the requisite French/English/German/Norwegian group, as well as the somewhat surprising Irish.
in addition to being intuitively plausible, it is not hard to find evidence in the literature for the contact relationship between English and Irish (Sommerfelt, 1960).
contrasting
train_20535
Our hierarchical Bayesian domain adaptation model is directed, as illustrated in Figure 1.
somewhat counterintuitively, the underlying (original) model of the data can be either directed or undirected, and for our experiments we use undirected, conditional random field-based models.
contrasting
train_20536
Probabilistic modeling has emerged as a dominant paradigm for these problems, and the EM algorithm has been a driving force for learning models in a simple and intuitive manner.
on some tasks, EM can converge slowly.
contrasting
train_20537
Online algorithms have the potential to speed up learning by making updates more frequently.
these updates can be seen as noisy approximations to the full batch update, and this noise can in fact impede learning.
contrasting
train_20538
For iEM, the time required to update $\mu$ with $s_i$ depends only on the number of nonzero components of $s_i$.
the sEM update is $\mu \leftarrow (1-\eta_k)\mu + \eta_k s_i$, and a naïve implementation would take time proportional to the total number of components.
contrasting
train_20539
Empirically, online methods are often faster by an order of magnitude (Collins et al., 2008), and it has been argued on theoretical grounds that the fast, approximate nature of online methods is a good fit given that we are interested in test performance, not the training objective (Bottou and Bousquet, 2008; Shalev-Shwartz and Srebro, 2008).
in the unsupervised NLP literature, online methods are rarely seen, and when they are, incremental EM is the dominant variant (Gildea and Hofmann, 1999; Kuo et al., 2008).
contrasting
train_20540
They can also be used for a variety of language processing tasks such as text categorization and information retrieval.
most documents do not provide keywords.
contrasting
train_20541
For example, the position of a phrase (measured by the number of words before its first appearance divided by the document length) is very useful for news article text, since keywords often appear early in the document (e.g., in the first paragraph).
for the less well structured meeting domain (lack of title and paragraph), these kinds of features may not be indicative.
contrasting
train_20542
For the graph-based methods, we notice that adding POS filtering also improves performance, similar to the TFIDF framework.
the graph method does not perform as well as the TFIDF approach.
contrasting
train_20543
Compared to the supervised results, the TFIDF approach is worse in terms of the individual maximum F-measure, but achieves similar performance when using the weighted relative score.
the unsupervised TFIDF approach is much simpler and does not require any annotated data for training.
contrasting
train_20544
A more principled approach to setting the costs would be to estimate from perceptual experiments or user studies what the impact of remaining in gap or overlap is compared to that of a cut-in or false interruption.
as a first approximation, the proposed cost structure offers a simple way to take into account some of the constraints of interaction.
contrasting
train_20545
The default decision boundary for this tagging task is 0.5 posterior probability (more likely than not), and tagging performance at that threshold is good (around 97% accuracy, as mentioned previously).
since this is a pre-processing step, we may want to reduce possible cascading errors by allowing more words into the sets B and E. In other words, we may want more precision in our set exclusion constraints.
contrasting
train_20546
Since there is no work required for these cells, the amount of work required to parse the sentence is reduced.
the quadratic bound does not include any potential reduced work in the remaining open cells.
contrasting
train_20547
For example, categories populating the cell spanning abc in position (1, 3) can be built in two ways: either by combining entries in cell (1, 1) with entries in (2, 3) at midpoint m = 1; or by combining entries in (1, 2) and (3, 3) at midpoint m = 2.
cell (1, 2) is closed, hence there is only one midpoint at which (1, 3) can be built (m = 1).
contrasting
train_20548
That mode of operation is useful for any model which purports to be potentially extensible to speech recognition or to model the human speech processor.
top-down parsers require exhaustive searches, meaning that they need to explore interpretations containing disfluency, even in the absence of syntactic cues for its existence.
contrasting
train_20549
This makes an assumption that repairs are always maximally local, which probably does not hurt accuracy, since most repairs actually are quite short.
this assumption is obviously not true in the general case, since in Figure 3 for example, the repair could trace all the way back to the S label at the root of the tree in the case of a restarted sentence.
contrasting
train_20550
Similar problems exist more widely throughout natural language processing where greedy based methods and heuristic beam search have been used in lieu of exact methods.
recently there has been an increasing interest in using Integer Linear Programming (ILP) as a means to find MAP solutions.
contrasting
train_20551
(2004) only consider sentences of up to eight tokens.
recent work (Riedel and Clarke, 2006) has shown that even exponentially large decoding problems may be solved efficiently using ILP solvers if a Cutting-Plane Algorithm (Dantzig et al., 1954) is used.
contrasting
train_20552
A simple log-linear form is used in SMT systems to combine feature functions designed for identifying good translations, with proper weights.
we often observe that tuning the weight associated with each feature function is indeed not easy.
contrasting
train_20553
Practically, as also shown in our experiments, we observe that simplex-downhill usually gives better solutions than MER with random restarts for both, and reaches the solutions much faster in most of the cases.
the simplex-downhill algorithm is an unconstrained algorithm, which does not leverage any domain knowledge in machine translation.
contrasting
train_20554
On our devset, we also observed that whenever optimizing toward TER (or mixture of TER & BLEU), MER does not seem to move much, as shown in Figure 1-(a) and Figure 1-(d).
on BLEU (NIST or IBM version), MER does move reasonably with random restarts.
contrasting
train_20555
One type regards words which refer specifically to the category name's meaning, such as pitcher for the category Baseball.
typical context words for the category which do not necessarily imply its specific meaning, like stadium, also come up as similar to baseball in LSA space.
contrasting
train_20556
It can be seen from Table 1 that using purely unigram features, the accuracy obtained is not any better than the majority classifier for qualified vs. bald distinction.
the Part-of-Speech bigram features and the not-in-scope features achieve a marginally significant improvement over the unigrams-only baseline.
contrasting
train_20557
Rubin et al. (2008), for example, propose an ontology and annotation tool for semantic annotation of image regions in radiology.
creating a dataset of image regions manually annotated and delineated by domain experts is a costly enterprise.
contrasting
train_20558
In the GMM mean supervector space, a naturally arising distance metric is the Euclidean distance metric.
it is observed that the supervectors show strong directional scattering patterns.
contrasting
train_20559
These compound classes are considered in the decoding process then projected back afterwards to recover the two types of frame↔FE connections.
some links are lost because decoding is sequential.
contrasting
train_20560
This task was quite simple, with glosses amenable to Web approaches, and is promising for automatically extending the coverage of a Malay lexicon.
we expect that the Malay glosses will block readings of Indonesian classifiers, and classifiers in other languages will require different strategies; we intend to examine this in future work.
contrasting
train_20561
The baseline algorithm has been found to be very useful in automatic speech recognition of agglutinative languages (Kurimo et al., 2006).
it often oversegments morphemes that are rare or not seen at all in the training data.
contrasting
train_20562
The communicative implications of accenting influence the interpretation of a word or phrase.
the acoustic excursions associated with accent are typically aligned with the lexically stressed syllable of the accented word.
contrasting
train_20563
This remains an area for future study.
if we accept that the feature representations accurately model the acoustic information contained in the regions of analysis and that the BURNC annotation is accurate, the most likely explanation for the superiority of word-based prediction over syllable- or vowel-based strategies is that the acoustic excursions correlated with accent occur outside a word's lexically stressed syllable.
contrasting
train_20564
(2008) using syntactic and acoustic components.
our experiments use only acoustic features, since we are concerned with comparing domains of acoustic analysis within the larger task of accent identification.
contrasting
train_20565
In fact, our corresponding result on the FrameNet corpus (Table 2) is P=0.784, R=0.571, $F_1$=0.661, where the corpus contains much more data, its sentences come from standard written text (no disfluencies are present), and it is in English, which is morphologically simpler than Italian.
the Italian corpus includes optimal syntactic annotation which exactly fits the frame semantics, and the number of frames is lower than in the FrameNet experiment.
contrasting
train_20566
In the EBDM framework for task-oriented dialogs, an agenda graph is manually designed to address two aspects of dialog management: (1) Keeping track of the dialog state with a view to ensuring steady progress towards task completion, and (2) Supporting n-best recognition hypotheses to improve the robustness of the dialog manager.
manually building such graphs for various applications may be labor intensive and time consuming.
contrasting
train_20567
Such a non-linear phenomenon can be implicitly captured by using the kernel trick.
its computational cost is very high, not only during training but also at inference time.
contrasting
train_20568
Voice Search applications provide a very convenient and direct access to a broad variety of services and information.
due to the vast amount of information available and the open nature of the spoken queries, these applications still suffer from recognition errors.
contrasting
train_20569
Automatic summarization has historically focused on summarizing events, a task embodied in the series of Document Understanding Conferences.
there has also been work on entity-centric summarization, which aims to produce summaries from text collections that are relevant to a particular entity of interest, e.g., product, person, company, etc.
contrasting
train_20570
(2006) explores retrieval systems that align query results to highlight points of commonality and difference.
we attempt to identify contrasts from the data, and then generate summaries that highlight them.
contrasting
train_20571
The highest scoring contrastive pair of summaries would consist of one for x that mentions a exclusively, and one for y that mentions b exclusively: these summaries each mention a prominent aspect of their product, and have no overlap with each other.
they provide a false contrast because they each attempt to contrast the other summary, rather than the other product.
contrasting
train_20572
Therefore, our next algorithm aims to set the PA based on the ratio of neg to pos instances in the entire corpus.
since we don't have labels for the entire corpus, we don't know this ratio.
contrasting
train_20573
Syntax-based MT systems have proven effective: the models are compelling and show good room for improvement.
decoding involves a slow search.
contrasting
train_20574
It is not clear that a decoder such as ours, without the source-tree constraint, would benefit from this method, as building a context-free forest consistent with future language model integration via cubes is expensive on its own.
we see potential integration of both methods in two places: first, the merge-lists algorithm can be used to lazily process any nested for-loops (including vanilla CKY) provided the iterands of the loops can be prioritized.
contrasting
train_20575
Thus, these policies have similar motivation to the syntactic features in the McDonald (2006) model.
there is a fundamental difference in the way these policies are computed.
contrasting
train_20576
When the English speaker points towards himself, the system will switch to English-Iraqi translation.
when the Wii is pointed towards somebody else, the system will switch to Iraqi-English translation.
contrasting
train_20577
The previous results indicate that we require human translation references on day 1 data to get improved performance on day 2.
our goal is to make a better system on day 2 while trying to minimize human effort on day 1.
contrasting
train_20578
Our preliminary results show that coreferring on the basis of just one special word and one named entity for those names in "low" or "very low" does not lose more than 1.5% in precision, while it gains up to 40% in recall for these cases.
for "very high" perplexity two-token names we were able to increase precision by requiring a stronger similarity between contexts.
contrasting
train_20579
It is less likely that newer web pages from "sigir2009" can be ranked higher using features that implicitly favor old pages.
the fundamental problem is that current approaches have focused on improving general ranking algorithms.
contrasting
train_20580
For the simplest approach, the same threshold value, θ, can be applied to all the patterns.
we assumed that each pattern has its own optimal threshold value as its own confidence score, which is different from other patterns' threshold values.
contrasting
train_20581
(2008) addressed the same task as we did in this paper (http://www.nist.gov/speech/tests/ace/).
to our knowledge, the language specific issue and feature contributions for Chinese event extraction have not been reported by earlier researchers.
contrasting
train_20582
Thus, segmentation is usually an indispensable step for further processing, e.g., Part-of-Speech tagging, parsing, etc.
the segmentation may cause a problem in some tasks, e.g., named entity recognition (Jing et al., 2003) and event trigger identification.
contrasting
train_20583
Compared to the Chinese event extraction system reported by Tan et al. (2008), our scores are much lower.
we argue that we apply much stricter evaluation metrics.
contrasting
train_20584
Despite the multi-topic nature of the speeches, differences in training and test perplexities indicate that the topics in the test set are well represented in the training set (corpus statistics in Table 1).
the Protocols corpus is a collection of medical protocols.
contrasting
train_20585
A given translation is considered to be suitable if it can be manually post-edited with effort savings, i.e., the evaluator thinks that a manual post-editing will increase his productivity.
if the evaluator prefers to ignore the proposed translation and start it over, the sentence is deemed not suitable.
contrasting
train_20586
With respect to the Protocols corpus, as expected, results were found to be not so satisfactory.
human translators themselves find these documents complex.
contrasting
train_20587
All data sets were case-sensitive with punctuation marks preserved.
in a real-world application, identical language resources covering three or more languages are not necessarily to be expected.
contrasting
train_20588
For these six languages, all language pair combinations achieved the highest scores using the English pivot translation approach.
English is the pivot language of choice for only 16.2% (11 out of 68) of the language pairs when translating from/into Japanese, Korean, Indonesian, or Malay.
contrasting
train_20589
For Chinese, the choice of the optimal pivot language varies largely depending on the language direction.
the selection of the optimal pivot language is not symmetric for 34.5% of the language pairs, i.e., a different optimal pivot language was obtained for the SRC-TRG compared to the TRG-SRC translation task.
contrasting
train_20590
Thus, the classifier which determines the precedence relation is not enough.
an adequate rule can be inferred with an additional classifier trained to find good starting points: a temporal adjunct may appear as the first constituent in a sentence; if it is not chosen for this position, it should be preceded by the pronominalized subject (she), the indirect object (him) and the short non-pronominalized object (an email).
contrasting
train_20591
In English, the positions of required elements of a sentence, verb phrase or noun phrase are relatively fixed.
many sentences also include adverbials whose position is not fixed (Figure 1).
contrasting
train_20592
Since our main focus is on robustness to speech recognition errors, our data set is limited to those questions that are worded very similarly to the candidate answers.
the approach is more general, and can be extended to tackle both challenges.
contrasting
train_20593
We have focussed principally on understanding meetings in terms of their lexical content, augmented by various multimodal streams.
in many interactions, the social signals are at least as important as the propositional content of the words (Pentland, 2008); it is a major challenge to develop meeting interpretation components that can infer and take advantage of such social cues.
contrasting
train_20594
The AMI corpus involved a substantial effort from many individuals, and provides an invaluable resource.
we do not wish to do this again, even if we are dealing with a domain that is significantly different, such as larger groups, or family "meetings".
contrasting
train_20595
It should be noted here that our focus is on improving parsing performance using a single underlying grammar class, which is somewhat orthogonal to the issue of parser combination, that has been studied elsewhere in the literature (Sagae and Lavie, 2006;Fossum and Knight, 2009;Zhang et al., 2009).
to that line of work, we also do not restrict ourselves to working with k-best output, but work directly with a packed forest representation of the posteriors, much in the spirit of Huang (2008), except that we work with several forests rather than rescoring a single one.
contrasting
train_20596
Our model is reminiscent of Logarithmic Opinion Pools (Bordley, 1982) and Products of Experts (Hinton, 2001).
because we believe that none of the underlying grammars should be favored, we deliberately do not use any combination weights.
contrasting
train_20597
In the example, one of the underlying grammars ($G_1$) had an imperfect recall score, because of its preference for flat structures (it missed an NP node in the second part of the sentence).
the other grammar ($G_2$) favors deeper structures, and therefore introduced a superfluous ADVP node.
contrasting
train_20598
Maximizing the expected number of correct productions is superior for $F_1$ score (see the one grammar case in Figure 6).
as is to be expected, likelihood is better for exact match, giving a score of 47.6% vs. 46.8%.
contrasting
train_20599
They also demonstrate that training their model on WSJ allows them to accurately predict parsing accuracy on the BROWN corpus.
our models are trained over multiple domains to model which factors influence cross-domain performance.
contrasting