id: string (length 7 to 12)
sentence1: string (length 6 to 1.27k)
sentence2: string (length 6 to 926)
label: string (4 classes)
train_10100
For example, one could model P(T | money order) via P(T | [source-language characters]) and P(T | manifest itself) via P(T | [source-language characters]).
because English words often have multiple parts of speech (e.g.
contrasting
train_10101
Given a dictionary marked with core parts of speech, it is trivial to generate hypothesized inflected forms following the regular paradigms, as shown in the left side of Figure 3.
due to irregularities and semi-regularities such as stem changes, such generation will clearly have substantial inaccuracies and overgenerations. [Figure 3 residue: a table of suffix-rewrite rules with morphological tags, e.g. a$ -> e$ Noun-Nomin-p3-fem-plur-indef, ru$ -> ra$ Adj-fem-sing, e$ -> em$ Verb-Indic_Pres-p1-plur.]
contrasting
train_10102
As theory predicts, unigram probability estimates for many words do converge as corpus size grows.
contrary to intuition, we found that for many commonplace words, for example tightness, there was no sign of convergence as corpus size approaches one billion words.
contrasting
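The convergence check described in train_10102 is easy to reproduce: track a word's relative frequency at increasing corpus sizes and see whether the estimate settles. A minimal sketch, assuming a token iterator; the function and variable names are illustrative, not from the cited paper.

```python
from collections import Counter

def running_unigram_estimates(tokens, word, checkpoints):
    """Record the relative frequency of `word` at each corpus-size checkpoint."""
    counts, total, estimates = Counter(), 0, []
    targets = iter(sorted(checkpoints))
    target = next(targets, None)
    for tok in tokens:
        counts[tok] += 1
        total += 1
        if target is not None and total == target:
            estimates.append((total, counts[word] / total))
            target = next(targets, None)
    return estimates

# e.g. checkpoints = [5_000_000, 50_000_000, 500_000_000, 1_000_000_000];
# a non-converging word like "tightness" keeps drifting between checkpoints
```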
train_10103
This shows the improvement in stability of the estimates from using large corpora, although ±20% is a considerable deviation from the goldstandard estimate.
other words, for instance tightness, fail spectacularly to converge to their final estimate before the influence of the forced convergence of the ratio starts to take effect.
contrasting
train_10104
There are also some content words that appear to be distributed evenly.
some words appear often in the first 5 million word sample but are not seen again in the remainder of the corpus.
contrasting
train_10105
The most obvious and expected conditioning of the random variables is the topic of the text in question.
it is hard to envisage seemingly topic-neutral words, such as tightness and newly, being conditioned strongly on topic.
contrasting
train_10106
ρ is not a probability measure, but just a function of y and c into [0,1].
since a geometric mean reflects "uniformity" of the averaged values, ρ captures the degree to which p(y, c, X_i) values are high across all datasets.
contrasting
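The ρ described in train_10106 is a geometric mean of p(y, c, X_i) over datasets, so it rewards uniformly high values. A minimal sketch of that computation; the log-space form is an implementation choice, not from the source.

```python
import math

def rho(p_values):
    """Geometric mean of p(y, c, X_i) over datasets X_1..X_n.

    High only when the values are uniformly high across datasets;
    a single near-zero dataset pulls the whole score toward zero.
    """
    if any(p <= 0 for p in p_values):
        return 0.0
    # log space avoids underflow when multiplying many small probabilities
    return math.exp(sum(math.log(p) for p in p_values) / len(p_values))
```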
train_10107
This causes two important problems that motivate our work: 1) annotating data is often a difficult and costly task involving a lot of human work, such that large collections of labelled data are difficult to obtain, and 2) interannotator agreement tends to be low in e.g. genomics collections (Krauthammer et al., 2000), thus calling for methods that are able to deal with noise and incomplete data.
unsupervised techniques do not require labelled data and can thus be applied regardless of the annotation problems.
contrasting
train_10108
With the introduction of unknown labels as supplementary optimisation variables, the constraints of the quadratic optimisation problem are now nonlinear, which makes solving more difficult.
approximated iterative algorithms exist which can efficiently train Transductive SVMs.
contrasting
train_10109
The context we consider contains the four preceding and the four following words.
we did not take into account the position of the words in the context, but only their presence in the right or left context, and in addition we replaced, whenever possible, each word by a feature indicating (a) whether the word was part of the gene lexicon, (b) if not whether it was part of the protein lexicon, (c) if not whether it was part of the species lexicon, (d) and if not, whenever the candidate was neither a noun, an adjective nor a verb, we replaced it by its part-of-speech.
contrasting
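The feature extraction in train_10109 (a ±4-word window, position-independent left/right context, lexicon backoff, then POS) can be sketched as below. The lexicon names, tag names, and the exact handling of the POS case are assumptions for illustration.

```python
def context_features(tokens, pos_tags, i, gene, protein, species, window=4):
    """Bag-of-context features for the candidate at position i.

    Position inside the window is ignored; only the left/right side is
    kept.  Each context word is backed off, in order, to GENE / PROTEIN /
    SPECIES lexicon membership, then to its POS tag unless it is a
    noun, adjective, or verb, in which case the word itself is kept.
    """
    feats = set()
    for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
        if j == i:
            continue
        side = "L" if j < i else "R"
        w, pos = tokens[j].lower(), pos_tags[j]
        if w in gene:
            feats.add(f"{side}:GENE")
        elif w in protein:
            feats.add(f"{side}:PROTEIN")
        elif w in species:
            feats.add(f"{side}:SPECIES")
        elif pos not in ("NOUN", "ADJ", "VERB"):
            feats.add(f"{side}:POS={pos}")
        else:
            feats.add(f"{side}:{w}")
    return feats
```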
train_10110
On one hand, Fisher kernels are derived from information-geometric arguments (Jaakkola and Haussler, 1999) which require that the kernel reduces to an inner-product of Fisher scores.
polynomial and RBF kernels often display better performance than a simple dot-product.
contrasting
train_10111
Important issues wide open to discussion are: validation of results, psycho-linguistic relevance of the experimental setup, principled ways of surpassing the contextfree limitations of Lambek grammar (inherited in GraSp), just to mention a few.
already the spin-offs of our project (the collection of non-linguistic learners) do inspire confidence in our tenets, we [...]. [Footnote 12: The learning experiment sketched in Moortgat (2001) shares some of GraSp's features.]
contrasting
train_10112
• No assumption on the independence between dependency relations: the probabilistic model assumes that dependency relations are independent.
there are some cases in which one cannot parse a sentence correctly with this assumption.
contrasting
train_10113
Japanese dependency relations are heavily constrained by such static features since the inflection forms and postpositional particles constrain the dependency relation.
when a sentence is long and there is more than one possible dependency, static features by themselves cannot determine the correct dependency.
contrasting
train_10114
At first impression, it may seem natural that higher accuracy is achieved with the probabilistic model, since all candidate dependency relations are used as training examples.
the experimental results show that the cascaded chunking model performs better.
contrasting
train_10115
A leading advantage of ME models is their flexibility: they allow stochastic rule systems to be augmented with additional syntactic, semantic, and pragmatic features.
the richness of the representations is not without cost.
contrasting
train_10116
Any degree of precision desired could be reached by any of the algorithms, with the appropriate value of ε.
GIS, say, would require many more iterations than reported in Table 2 to reach the precision achieved by the limited-memory variable metric algorithm.
contrasting
train_10117
We typically think of computational linguistics as being primarily a symbolic discipline.
statistical natural language processing involves non-trivial numeric computations.
contrasting
train_10118
Perhaps the discrepancy (especially for the [...] class) reflects differences in the way the training data was annotated; it is a highly heterogeneous class, and the criteria for distinguishing between [...]. The models described here are very simple and efficient, depend on no preprocessing or (with the exception of [...]) external databases, and yet provide a dramatic improvement over a baseline model.
the performance is still quite a bit lower than results for industrial-strength language-specific named entity recognition systems.
contrasting
train_10119
Moreover, some elements of such multiwords (like president) are high frequency items, well known and of high significance for identifying a named entity.
there are elements that often appear in named entities, but are not characteristic for them (like the first name Israel).
contrasting
train_10120
During the run the error rate increases due to finding candidates and verification through misclassified items.
as the "era" example (see section 4) illustrates, misclassified items support the classification of goal items.
contrasting
train_10121
The maximum entropy models benefited from extra features which encoded capitalization information, positional information and information about the current word being part of a person name earlier in the text.
incorporating a list of person names in the training process did not help (Spanish test set: F_β=1 = 73.66; Dutch test set: F_β=1 = 68.08). Jansche (2002) employed a first-order Markov model as a named entity recognizer.
contrasting
train_10122
One might conjecture that a supertagging approach could go a long way toward parse disambiguation.
an upper bound for such an approach for our corpus is below 55 percent parse selection accuracy, which is the accuracy of an oracle tagger that chooses at random among the parses having the correct tag sequence (Oepen et al., 2002).
contrasting
train_10123
Then, for instance, in the sentence (3) "John talks to Mary", with logical form (4) talk-communicative-act(e,x,y), john(x), comm-to(y), mary(y), the verb talks has two arguments, the NP subject John, and the PP to Mary, as represented in the logical form associated with the verb, where the PP is the second argument and as such should be included in the subcategorisation frame of the verb: (S\NP)/PP.
in the sentence (5) "Bob eats with a fork", with logical form (6) eat-ingest-act(e,x), bob(x), instr-with(e,y), a(y), fork(y), the PP with a fork is not an argument of the verb eat, as reflected in its logical form, and should not be included in its subcategorisation frame, which is S\NP.
contrasting
train_10124
In this case, the PP is an adjunct to the verb.
this preposition can be selected as the argument of the verb swim in sentence 9, which denotes a motion event and is an instance of type motion-act.
contrasting
train_10125
Recent studies on the automatic extraction of lexicalized grammars (Xia, 1999;Chen and Vijay-Shanker, 2000;Hockenmaier and Steedman, 2002a) allow the modeling of syntactic disambiguation based on linguistically motivated grammar theories including LTAG (Chiang, 2000) and CCG (Clark et al., 2002;Hockenmaier and Steedman, 2002b).
existing models of disambiguation with lexicalized grammars are a mere extension of lexicalized probabilistic context-free grammars (LPCFG) (Collins, 1996;Collins, 1997;Charniak, 1997), which are based on the decomposition of parsing results into the syntactic/semantic dependencies of two words in a sentence under the assumption of independence of the dependencies.
contrasting
train_10126
The first problem with LPCFG is partially solved by this model, since the dependencies not represented in LPCFG (e.g., long-distance dependencies and argument/modifier distinctions) are elegantly represented, while some relations (e.g., the control relation between "want" and "student") are not yet represented.
the other two problems remain unsolved in this model.
contrasting
train_10127
This model can represent all the syntactic/semantic dependencies of words in a sentence.
the statistical model is still a mere extension of LPCFG, i.e., it is based on decomposition into primitive lexical dependencies.
contrasting
train_10128
A possible solution to this problem is to directly estimate p(A|w) by applying a maximum entropy model (Berger et al., 1996).
such modeling will lead us to extensive tweaking of features that is theoretically unjustifiable, and will not contribute to the theoretical investigation of the relations of syntax and semantics.
contrasting
train_10129
Consequently, the probability model takes the following form.
this model has a crucial flaw: the maximum likelihood estimation of semantics probability is intractable.
contrasting
train_10130
A possible solution to this problem is to map SVMs' results into probabilities through a Sigmoid function, and use Viterbi search to combine those probabilities (Platt, 1999).
this approach conflicts with SVMs' purpose of achieving the so-called global optimization 1 .
contrasting
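The sigmoid mapping mentioned in train_10130 is Platt scaling: fit a sigmoid over raw SVM decision values so they can be combined as probabilities (e.g., in a Viterbi search). A sketch with placeholder parameters; in practice A and B are fit on held-out data.

```python
import math

def platt_probability(svm_score, A=-1.0, B=0.0):
    """Map a raw SVM decision value to a probability via a sigmoid
    (Platt, 1999): P(y=1 | score) = 1 / (1 + exp(A*score + B)).
    The A and B used here are placeholders for illustration."""
    z = max(min(A * svm_score + B, 500.0), -500.0)  # clamp to avoid overflow
    return 1.0 / (1.0 + math.exp(z))
```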
train_10131
As far as SVMs are concerned, the use of parses or pairs of parses both maximize the margin between x_i1 and x_ij, but the one using a single parse as a sample needs to satisfy some extra constraints on the selection of decision function.
these constraints are not necessary (see section 3.3).
contrasting
train_10132
E is the event space of the classification problem. For a classifier g_f on space E, its loss function is l(x_1, x_2) = 1 if x_2 ≠ x_1 and g_f(x_1, x_2) = +1, and 0 otherwise. Therefore the expected risk R_vote(f) for the voting problem is equivalent to the expected risk R_class(g_f) for the classification problem.
the definition of space E violates the independently and identically distributed (iid) assumption.
contrasting
train_10133
Variants of the Perceptron algorithm known as Approximate Maximal Margin classifiers, such as PAM (Krauth and Mezard, 1987), ALMA (Gentile, 2001) and PAUM (Li et al., 2002), produce decision hyperplanes within a given ratio of the maximal margin.
almost all these algorithms are reported to be inferior to SVMs in accuracy, while more efficient in training.
contrasting
train_10134
This seems to nullify our effort to create this new algorithm.
our new algorithm is still useful for the following reasons.
contrasting
train_10135
The main difference between Basilisk and Meta-Bootstrapping is that Basilisk scores each noun based on collective information gathered from all patterns that extracted it.
Meta-Bootstrapping identifies a single best pattern and assumes that everything it extracted belongs to the same semantic class.
contrasting
train_10136
To find candidate seed words, we automatically identified 850 nouns that were positively correlated with subjective sentences in another data set.
it is crucial that the seed words occur frequently in our FBIS texts or the bootstrapping process will not get off the ground.
contrasting
train_10137
Previous work (e.g., Bagga and Baldwin, 1998) has primarily approached personal name disambiguation using document context profiles or vectors, which recognize and distinguish identical name instances based on partially indicative words in context, such as computer or car in the Clark case.
in the specialized case of personal names, there is more precise information available.
contrasting
train_10138
This would yield a set of cluster seeds, divided by features, which could then be used for further clustering.
this method relies on having a number of pages with extracted features that overlap from each referent.
contrasting
train_10139
The original EM method failed to boost the precision (76.78%) of the rule learned through only labeled data.
CV-EM and CV-EM2 boosted the precision to 77.88% and 78.56%.
contrasting
train_10140
Following this procedure, we should look up the thesaurus ID of the word 'suru'.
we do not look up the thesaurus ID for a word that consists of hiragana characters, because such words are too ambiguous, that is, they have too many thesaurus IDs.
contrasting
train_10141
In many cases, CV-EM2 outputs the same number as CV-EM.
the basic idea is different.
contrasting
train_10142
The EM method failed to boost it, and degraded it to 73.56%.
by using CV-EM the precision was boosted to 77.88%.
contrasting
train_10143
In CV-EM, the best point is selected in cross-validation, that is, iteration 3.
CV-EM2 estimates 0 by using the relation among three precisions: the initial precision, the precision at iteration 1, and the precision at convergence.
contrasting
train_10144
For one word 'mae' in nouns and three words 'kuru', 'koeru' and 'tukuru' in verbs, CV-EM was superior to CV-EM2.
for four words 'atama', 'kakun', 'te' and 'doujitsu' in nouns and four words 'ukeru', 'umareru', 'toru' and 'mamoru' in verbs, CV-EM2 was better than CV-EM.
contrasting
train_10145
In ideal estimation, the precision for noun words was boosted from 76.78% to 79.64% by the EM method, that is 1.037 times.
the precision for verb words was boosted from 78.16% to 79.92% by the EM method, that is 1.022 times.
contrasting
train_10146
Naive Bayes assumes the independence of features, too.
this assumption is not so rigid in practice.
contrasting
train_10147
Further results show that this form of co-training considerably outperforms self-training.
we find that simply retraining on all the newly labelled data can, in some cases, yield comparable results to agreement-based co-training, with only a fraction of the computational cost.
contrasting
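The cheap baseline mentioned in train_10147 (retraining on all the newly labelled data, i.e., plain self-training without agreement filtering) looks roughly like this. `fit` and `predict` stand in for whatever trainer is used; this sketch is not the paper's exact protocol.

```python
def retrain_all(train, unlabeled, fit, predict, rounds=5):
    """'Retrain on everything' baseline: label the whole unlabeled pool
    with the current model, then refit on the union of gold and
    pseudo-labelled data."""
    model = fit(train)
    for _ in range(rounds):
        pseudo = [(x, predict(model, x)) for x in unlabeled]
        model = fit(train + pseudo)
    return model
```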
train_10148
In speech processing, various adaption techniques have been proposed for language modeling.
the language modeling problem is essentially unsupervised (density estimation) in the sense that it does not require any annotation.
contrasting
train_10149
In addition, we also manually labeled the following data: [...]. It is perhaps not surprising that a sentence boundary classifier trained on WSJ does not perform nearly as well on some of the other data sets.
it is useful to examine the source of these extra errors.
contrasting
train_10150
Since it is only an approximation of the conditional probability, this estimate may not be entirely accurate.
one would expect it to give a reasonably indicative measure of the classification performance.
contrasting
train_10151
If we can identify those data, then a natural way to enhance the underlying classifier's performance would be to include them in the training data, and then retrain.
a human is required to obtain labels for the new data, but our goal is to reduce the human labeling effort as much as possible.
contrasting
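The selection step implied by train_10150 and train_10151 (use the approximate conditional probability as a confidence score and hand the least confident examples to a human for labeling) is the classic uncertainty-sampling loop. A minimal binary-classification sketch; as train_10152 notes below, this simplest version tends to pick inherently hard and redundant examples.

```python
def select_for_labeling(pool, predict_proba, budget):
    """Pick the `budget` unlabeled examples whose predicted class
    probability is least confident (closest to 0.5 for a binary task).
    `predict_proba(x)` returns P(positive | x) from the current model."""
    scored = [(abs(predict_proba(x) - 0.5), x) for x in pool]
    scored.sort(key=lambda pair: pair[0])
    return [x for _, x in scored[:budget]]
```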
train_10152
This method helps to some extent, but not as much as we originally expected.
we have only used the simplest version of this method, which is susceptible to two problems mentioned earlier: it tends (a) to select data that are inherently hard to classify, and (b) to select redundant data.
contrasting
train_10153
using six types of editing (editing based on low and high value for all three criteria).
with the previous study, where for all tasks even the smallest editing led to significant accuracy decreases, for our task there was no clear decrease in performance.
contrasting
train_10154
First, we run the predictors on all tasks using all features.
with the previous study which favored the memory-based learner for almost all their tasks, our results favor IB1-IG for only two of the four tasks (ISCORR and STATUS).
contrasting
train_10155
We have previously shown that a broad set of 220 noisy features performs well in supervised verb classification (Joanis and Stevenson, 2003).
to Merlo and Stevenson (2001), we confirmed that a set of general features can be successfully used, without the need for manually determining the relevant features for distinguishing particular classes (cf.
contrasting
train_10156
Dorr and Jones, 1996; Schulte im Walde and Brew, 2002).
in contrast to Schulte im Walde and Brew (2002), we demonstrated that accurate subcategorization statistics are unnecessary (see also Sarkar and Tripasai, 2002).
contrasting
train_10157
In tests of the measure on some contrived clusterings, we found it quite conservative, and on our experimental clusterings it did not often attain values higher than .25.
it is useful as a relative measure of goodness, in comparing clusterings arising from different feature sets.
contrasting
train_10158
The measure is independent of the true classification, and could be high when the other dependent measures are low, or vice versa.
it gives important information about the quality of a clustering: The other measures being equal, a clustering with a higher value indicates tighter and more separated clusters, suggesting stronger inherent patterns in the data.
contrasting
train_10159
This performance comparison tentatively suggests that good feature selection can be helpful in our task.
it is important to find a method that does not depend on having an existing classification, since we are interested in applying the approach when such a classification does not exist.
contrasting
train_10160
Many feature sets performed very well, and some far outperformed our best results using other feature selection methods.
across our 10 experimental tasks, there was no consistent range of feature ranks or feature set sizes that was correlated with good performance.
contrasting
train_10161
For FrameNet, the combined use of both collocation types achieves better performance for the individual classifiers: 70.3% versus 68.5%.
classification using a single classifier is not effective due to confusion among the fine-grained roles.
contrasting
train_10162
It enhances a WordNet synset with its contextual information and refines its relational structure, including relations such as hypernym, hyponym, antonym and synonym, by maintaining only those links that respect contextual constraints.
PhraseNet is not just a functional extension of WordNet.
contrasting
train_10163
There is some evidence (Wessel et al., 2001) that this approach can give results that are at least as good as those obtainable with an external CE model.
CE as we present it here is not incompatible with traditional techniques, and has several practical advantages.
contrasting
train_10164
In order to abstract away from approximations made in deriving character-based probabilities p(k|x, h, s) used in the benefit calculation from word-based probabilities, we employed a specialized user model.
to the realistic model described in (Foster et al., 2002b), this assumes that users accept predictions only at the beginnings of words, and only when they are correct in their entirety.
contrasting
train_10165
Various morpheme extraction algorithms can be applied to the data.
the main advantage of our framework is that it presents the morphology algorithm of choice with a training set for particular linguistic features.
contrasting
train_10166
This accuracy compares favorably with the accuracy of 40% obtained for the raw hyponymy extraction experiments in Section 2, suggesting that inferring new relations by using corpus-based similarities to previously known relations is more reliable than trying to learn completely new relations, even if they are directly attested in the corpus.
our accuracy falls way short of the figure of 82% reported by Widdows and Dorow (2002).
contrasting
train_10167
State-of-theart machine learning techniques including Support Vector Machines (Vapnik, 1995), AdaBoost (Schapire and Singer, 2000) and Maximum Entropy Models (Ratnaparkhi, 1998;Berger et al., 1996) provide high performance classifiers if one has abundant correctly labeled examples.
annotating a large set of examples generally requires a huge amount of human labor and time.
contrasting
train_10168
Combining a naive Bayes classifier with the EM algorithm is one of the promising minimally supervised approaches because its computational cost is low (linear in the size of unlabeled data), and it does not require the features to be split into two independent sets, unlike co-training.
the use of unlabeled data via the basic EM algorithm does not always improve classification performance.
contrasting
train_10169
We first transform the probability P(c_k | x) using Bayes' rule: P(c_k | x) = P(c_k) P(x | c_k) / P(x). Class probability P(c_k) can be estimated from training data.
direct estimation of P(c_k | x) is impossible in most cases because of the sparseness of training data.
contrasting
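train_10169 motivates the naive Bayes factorization: P(c_k | x) is rewritten via Bayes' rule because it cannot be estimated directly from sparse data. A sketch of the resulting posterior computation in log space; the smoothing floor for unseen features is an assumption of this sketch, not from the paper.

```python
import math

def nb_posterior(features, classes, log_prior, log_likelihood,
                 floor=math.log(1e-10)):
    """Posterior P(c_k | x) from the Bayes-rule factorization under the
    naive independence assumption: P(c_k | x) ∝ P(c_k) Π_j P(f_j | c_k).
    `log_likelihood[c]` maps a feature to log P(f | c); unseen features
    fall back to an arbitrary floor."""
    log_joint = {c: log_prior[c] + sum(log_likelihood[c].get(f, floor)
                                       for f in features)
                 for c in classes}
    m = max(log_joint.values())  # shift for numerically stable normalization
    z = m + math.log(sum(math.exp(v - m) for v in log_joint.values()))
    return {c: math.exp(v - z) for c, v in log_joint.items()}
```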
train_10170
For example, word occurrence is a commonly used feature for text classification.
obvious strong dependencies exist among word occurrences.
contrasting
train_10171
As described in the previous section, the naive Bayes classifier can be easily extended to exploit unlabeled data by using the EM algorithm.
the use of unlabeled data for actual tasks exhibits mixed results.
contrasting
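The naive-Bayes-plus-EM combination discussed in train_10168 and train_10171 alternates soft-labelling the unlabeled data (E-step) with refitting counts (M-step). A rough sketch in the style of Nigam et al.; the add-alpha smoothing and uniform initialization are simplifications, not the evaluated model.

```python
import math
from collections import defaultdict

def em_naive_bayes(labeled, unlabeled, classes, n_iter=10, alpha=1.0):
    """Semi-supervised naive Bayes via EM.  `labeled` is a list of
    (feature_list, class); `unlabeled` is a list of feature lists.
    Smoothing here is deliberately crude and assumes nonempty data."""
    soft = [{c: 1.0 / len(classes) for c in classes} for _ in unlabeled]
    prior, feat = {}, {}
    for _ in range(n_iter):
        # M-step: priors and per-class feature counts from hard labels
        # plus fractional counts from the current soft labels
        prior = {c: alpha for c in classes}
        feat = {c: defaultdict(lambda: alpha) for c in classes}
        for x, c in labeled:
            prior[c] += 1.0
            for f in x:
                feat[c][f] += 1.0
        for x, dist in zip(unlabeled, soft):
            for c, w in dist.items():
                prior[c] += w
                for f in x:
                    feat[c][f] += w
        total = sum(prior.values())
        norm = {c: sum(feat[c].values()) for c in classes}
        # E-step: re-estimate soft labels for the unlabeled documents
        for i, x in enumerate(unlabeled):
            logp = {c: math.log(prior[c] / total)
                       + sum(math.log(feat[c][f] / norm[c]) for f in x)
                    for c in classes}
            m = max(logp.values())
            z = sum(math.exp(v - m) for v in logp.values())
            soft[i] = {c: math.exp(v - m) / z for c in classes}
    return prior, feat, soft
```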
train_10172
Their results showed that they gained a slight improvement by using a certain amount of unlabeled data.
test set accuracy began to decline as additional data were harvested.
contrasting
train_10173
The naive Bayes classifier can be combined with the wellestablished EM algorithm to exploit the unlabeled data .
the use of unlabeled data sometimes causes disastrous degradation of classification performance.
contrasting
train_10174
(2003) achieved the highest overall F_β=1 rate.
the difference between their performance and that of the Maximum Entropy approach of Chieu and Ng (2003) is not significant.
contrasting
train_10175
While in the development set the NER module achieves a very good balance between precision and recall, in the test set the precision drops almost 4 points, with the F_1 results being much worse.
development and test sets for German are much more similar.
contrasting
train_10176
A simple combination method is the equal voting method (van Halteren et al., 2001; Tjong Kim Sang et al., 2000), where the parameters are computed as λ_i(w) = 1/n and P_i(C|w, C_i) = δ(C, C_i), where δ is the Kronecker delta (δ(x, y) := (x = y ? 1 : 0)): each of the classifiers votes with equal weight for the class that is most likely under its model, and the class receiving the largest number of votes wins.
this procedure may lead to ties, where some classes receive the same number of votes; one usually resorts to randomly selecting one of the tied candidates in this case. Table 2 presents the average results obtained by this method, together with the variance obtained over 30 trials.
contrasting
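The equal voting rule in train_10176, with random tie-breaking, is a few lines:

```python
import random
from collections import Counter

def equal_vote(outputs):
    """Equal voting: every classifier casts one vote for its own output
    (λ_i(w) = 1/n, P_i(C|w, C_i) = δ(C, C_i)); ties are broken at random."""
    votes = Counter(outputs)
    top = max(votes.values())
    return random.choice([c for c, n in votes.items() if n == top])
```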
train_10177
In the voting methods, each classifier gave its entire vote to one class -its own output.
equation (2) allows for classifiers to give partial credit to alternative classifications, through the probability P_i(C|w, C_i).
contrasting
train_10178
We did also try to incorporate gazetteer information by adding n-gram counts from gazetteer entries to the training counts that back the above character emission model.
this reduced performance (by 2.0% with context on).
contrasting
train_10179
Typically, efficient inference procedures in both frameworks rely on dynamic programming (e.g., Viterbi), which works well in sequential data.
in many important problems, the structure is more general, resulting in computationally intractable inference.
contrasting
train_10180
It also trains the two classifiers separately as the basic approach.
it assumes that the entity classifier knows the correct relation labels, and similarly the relation classifier knows the right entity labels as well.
contrasting
train_10181
Therefore, the quality is not always good.
our global inference procedure, LP, takes the natural constraints into account, so it never generates incoherent predictions.
contrasting
train_10182
The aim of bootstrapping is to compensate for the paucity of labeled examples.
its potential danger is label 'contamination'namely, wrongly (automatically) labeled examples may misdirect the succeeding iterations.
contrasting
train_10183
This is not surprising since a small number of randomly chosen seeds provide much less information (corpus statistics) than high frequency seeds.
spectral's performance [...]. [Figure 4 residue: F-measure results (%) in relation to the choice of input vectors for spectral analysis; 100 seeds: 61.3 / 47.8 / 30.6 / 48.1 and 300 seeds: 64.5 / 57.6 / 37.9 / 53.9 over columns including Medium, Low, and All.]
contrasting
train_10184
Previous work has reached mixed conclusions; some suggest that combinations of syntactic and lexical features will perform most effectively.
others have shown that simple lexical features perform well on their own.
contrasting
train_10185
We used the training and test data divisions that already exist in the SENSEVAL-2 and SENSEVAL-1 data.
the line, hard, serve and interest data do not have a standard division, so we randomly split the instances into test (20%) and training (80%) portions.
contrasting
train_10186
If the target word line is used in the plural form, is preceded by a personal pronoun and the word following it is not a preposition, then it is likely that the intended sense is line of text as in the actor forgot his lines or they read their lines slowly.
if the word preceding line is a personal pronoun and the word following it is a preposition, then it is probably used in the product sense, as in, their line of clothes.
contrasting
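The two hand-built cues for line in train_10186 translate directly into a toy decision rule; the pronoun and preposition word lists are assumed inputs.

```python
def guess_line_sense(prev_word, is_plural, next_word, pronouns, prepositions):
    """Toy rule from the example: pronoun + plural 'lines' + no following
    preposition -> text sense; pronoun before and preposition after ->
    product sense."""
    if prev_word in pronouns and is_plural and next_word not in prepositions:
        return "text"       # "the actor forgot his lines"
    if prev_word in pronouns and next_word in prepositions:
        return "product"    # "their line of clothes"
    return None             # rule abstains

# e.g. guess_line_sense("his", True, "slowly", {"his", "their"}, {"of", "in"})
```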
train_10187
Such features are not expected to perform much better than the majority classifier.
when used in combination with other features, they may be useful.
contrasting
train_10188
Under some ideal conditions, where the optimal parameters can be identified, this is the highest improvement that can be achieved for the given labeled set.
most of the times, it may not be possible to find these optimal parameter values.
contrasting
train_10189
Even though we use a "local versus topical" feature split, which divides the features into two separate views on sense classification, there might be some natural dependencies between the features, since they are extracted from the same context; this may weaken the independence condition and may sometimes make the behavior of co-training similar to a self-training process.
as theoretically shown in (Abney, 2002), and then empirically in (Clark et al., 2003), co-training still works under a weaker independence assumption, and the results we obtain concur with these previous observations.
contrasting
train_10190
This occurs because in small and sparse data, direct first order features are seldom observed in both the training and the test data.
the indirect second order co-occurrence relationships that are captured by these methods provide sufficient information for discrimination to proceed.
contrasting
train_10191
The function we want to approximate is a mapping f from parser configurations to parser actions, where each action consists of a transition and (unless the transition is Shift or Reduce) a dependency type: [equation omitted]. Here Config is the set of all possible parser configurations and R is the set of dependency types as before.
in order to make the problem tractable, we try to learn a function f̂ whose domain is a finite space of parser states, which are abstractions over configurations.
contrasting
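train_10191's move from configurations to a finite space of parser states is a feature abstraction. A sketch of one such abstraction, assuming a configuration object with a stack and input buffer of tokens; the particular features chosen are illustrative, not the paper's model.

```python
from typing import NamedTuple

class State(NamedTuple):
    """A point in the finite space of parser states that the learned
    function is defined over; each field abstracts a configuration."""
    stack_top_pos: str    # POS tag of the token on top of the stack
    next_input_pos: str   # POS tag of the next input token
    stack_top_dep: str    # dependency type already assigned to the stack top

def abstract(config):
    """Map a full parser configuration to its State abstraction.
    Assumes `config` exposes `stack` and `buffer` lists of tokens with
    `.pos` and `.deprel` attributes (an illustrative interface)."""
    top = config.stack[-1] if config.stack else None
    nxt = config.buffer[0] if config.buffer else None
    return State(
        stack_top_pos=top.pos if top else "NONE",
        next_input_pos=nxt.pos if nxt else "NONE",
        stack_top_dep=(top.deprel or "NONE") if top else "NONE",
    )
```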
train_10192
Since parsing is a sentence-level task, we believe that the overall attachment score should be computed as the mean attachment score per sentence, which gives an estimate of the expected attachment score for an arbitrary sentence.
since most previous studies instead use the mean attachment score per word (Eisner, 1996;, we will give this measure as well.
contrasting
train_10193
Although some variants to the usual dot-product are sometimes used (for example, higher-order polynomial kernels and RBF kernels), the distribution of examples is not taken into account in such kernels.
new types of kernels have more recently been proposed; they are based on the probability distribution of examples.
contrasting
train_10194
So far, we have discussed the computational time for the kernel constructed on the Gaussian mixture.
the computational advantage of the kernel, in fact, is shared by a more general class of models.
contrasting
train_10195
The total number of categories is actually 116.
for small categories, reliable statistics cannot be obtained.
contrasting
train_10196
Before the first iteration the seed points are initialized to random values.
a bad choice of initial centers can have a great impact on performance, since k-means is fully deterministic, given the starting seed points.
contrasting
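Because k-means is deterministic given its seed points (train_10196), the standard mitigation is several random restarts, keeping the lowest-distortion clustering. A small sketch over points represented as tuples of floats; nothing here is specific to the cited work.

```python
import random

def kmeans(points, k, n_iter=100, seed=None):
    """Plain k-means; the result is fully determined by the initial
    centers drawn from `points`, so a bad draw can hurt."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(n_iter):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda j: sum((a - b) ** 2
                                            for a, b in zip(p, centers[j])))
            clusters[nearest].append(p)
        new = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[j]
               for j, c in enumerate(clusters)]
        if new == centers:
            break
        centers = new
    return centers

def best_of_restarts(points, k, restarts=10):
    """Keep the restart with the lowest total squared distance
    from each point to its nearest center."""
    def distortion(cs):
        return sum(min(sum((a - b) ** 2 for a, b in zip(p, c)) for c in cs)
                   for p in points)
    return min((kmeans(points, k, seed=s) for s in range(restarts)),
               key=distortion)
```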
train_10197
This remains to be proven conclusively (Dagan et al., 1999).
similarity-based techniques do not discard any data.
contrasting
train_10198
The backing off steps are shown in Figure 1.
if we use the similarity-based language model shown in (2) as a guide, then we can create a smoothed version of Collins' model using the weighted probability of all similar PPs (for brevity we use c to indicate the context, in this case an entire PP quadruple): [equation omitted]. Similarly to the language model shown in (2), the set of similar contexts S(c) and similarity function α(c, c′) must be defined for multiple words (we abuse our notation slightly by using the same α and S for both PPs and words, but the meaning should be clear from the context).
contrasting
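The weighted probability over similar PPs in train_10198 has the same shape as the similarity-based language model it cites: a normalized α-weighted average of the base estimate over S(c). A sketch; `similar`, `alpha`, and `base_prob` are assumed callables, not the paper's API.

```python
def smoothed_prob(c, base_prob, similar, alpha):
    """Similarity-smoothed estimate in the spirit of model (2):
    weight the base model's probability for each similar context c'
    in S(c) by alpha(c, c') and renormalize."""
    neighbors = similar(c)
    z = sum(alpha(c, cp) for cp in neighbors)
    if z == 0:
        return base_prob(c)  # no usable neighbors: fall back to the base model
    return sum(alpha(c, cp) * base_prob(cp) for cp in neighbors) / z
```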
train_10199
Part of the problem is that words in the PP are replaced independently and without consideration to the remaining context.
we had hoped the specialist thesaurus might alleviate this problem by providing neighbours that are more appropriate for this specific task.
contrasting