id: stringlengths 7–12
sentence1: stringlengths 6–1.27k
sentence2: stringlengths 6–926
label: stringclasses, 4 values
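The listing below follows a fixed four-line-per-record layout (id, sentence1, sentence2, label). As a convenience, here is a minimal sketch of a reader for a plain-text dump in that layout; the file name contrasting_pairs.txt, the Record class, and the "train_" prefix check are placeholder assumptions introduced here, not part of the dataset itself.

```python
# Minimal sketch: read a plain-text dump where each record occupies four
# consecutive non-empty lines in the order id, sentence1, sentence2, label.
# "contrasting_pairs.txt" and the Record class are placeholders (assumptions).
from dataclasses import dataclass
from typing import Iterator, List


@dataclass
class Record:
    id: str          # e.g. "train_6100" (7-12 characters)
    sentence1: str   # 6-1.27k characters
    sentence2: str   # 6-926 characters
    label: str       # one of 4 classes, e.g. "contrasting"


def read_records(path: str) -> Iterator[Record]:
    with open(path, encoding="utf-8") as f:
        lines: List[str] = [ln.strip() for ln in f if ln.strip()]
    # Skip any schema header until the first record id (assumed to start
    # with a split prefix such as "train_").
    start = 0
    while start < len(lines) and not lines[start].startswith("train_"):
        start += 1
    # Group the remaining lines four at a time; any trailing incomplete
    # record is silently dropped.
    for i in range(start, len(lines) - 3, 4):
        yield Record(*lines[i:i + 4])


if __name__ == "__main__":
    for rec in read_records("contrasting_pairs.txt"):
        print(rec.id, rec.label)
```

Validation (for example, checking that label is one of the four classes) is omitted to keep the sketch short.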
train_6100
When CLIR is considered simply as a combination of separate MT and IR components, the embedding of the two functions is not a problem.
as we explained in Section 1, there are theoretical motivations for embedding translation into the retrieval model.
contrasting
train_6101
This may lead to more reliable estimates for P(e | f ) and P(e | i) than the reverse.
further investigation is needed to confirm this, since differences in morphology could also contribute to the observed effect.
contrasting
train_6102
We noted above that faced with the new input in (2), a TM system might be able to present the translator with the fuzzy matches in (1).
if a translator were to set the level of fuzzy matching at 80% (a not unreasonable level), then neither of the translation pairs in (1) would be deemed to be a suitably good fuzzy match, as only 7/9 (77%) of the words in (1a) match those in (2) exactly, and only 3/9 (33%) of the words in (1b) match those in (2) exactly.
contrasting
train_6103
You can attach =⇒ Vous pouvez rélier (from (6a)) b. a mouse =⇒ une souris (from (6b)) c. to the connector =⇒ au connecteur (from (6a)) Recombining the French chunks gives us the correct translation in (9): (9) Vous pouvez rélier une souris au connecteur.
a number of mistranslations could also ensue, including those in (10): (10) a.
contrasting
train_6104
It may appear, therefore, that there are three chunks in the English string and only two on the French side, but this is not the case: The restriction that each segment must contain at least one non-marker word ensures that we have just two marker chunks for the English string in (18).
it remains the case that the chunks are tagged differently; we obtain the marker chunks in (19): (19) English: <DET> The man looks <PREP> at <DET> the woman French: <DET> L' homme regarde <DET> la femme Our alignment method would therefore align the first English chunk with the first French chunk, as their marker categories match.
contrasting
train_6105
A similar effect is seen where the databases of wEBMT were seeded with third-person singular verbs, of course.
we should expect an improvement in translation quality when both sets of verb forms are included in the memories of the system (see Experiment 2).
contrasting
train_6106
The major reason that translations fail to be produced in 6% of cases is the absence of a relevant generalized template.
for example, the unseen input her negative TV ads is generalized to (42): <POSS> negative TV ads. The nearest relevant generalized template found in the system's memory is (43): That is, the template in (43) allows the insertion of any determiner, but no other marker word.
contrasting
train_6107
evaluated: Of course, if we consider (say) three-chunk combinations from either system A or B, the only possibilities are AAA or BBB, respectively.
the number of translations produced by the system is less significant than their quality.
contrasting
train_6108
With respect to system A, we can assume that our net gain would increase further.
we provide two other interpretations of the net gain of EBMT, calculated using the formula in (47): Net Gain = Coverage Percentage + K(Translation Quality). The term Translation Quality in (47) refers to the number of translations preferred by the human evaluator, excluding cases in which one system failed to produce a translation, which is already factored into the equation under the term Coverage Percentage.
contrasting
train_6109
the similar approach that we have taken with respect to (40), for instance).
in all of these cases this utility is lessened considerably given that either system B or system C produces a better translation.
contrasting
train_6110
The system simply attaches the translation with the highest weight to the existing chunk ordinateurs personnels to produce the mistranslation in (50): (50) *la ordinateurs personnels The problem of boundary friction is clearly visible here: We have inserted a feminine singular determiner into a chunk that was generalized from a masculine plural NP.
rather than output this wrong translation directly, we use a post hoc validation and (if required) correction process based on Grefenstette (1999).
contrasting
train_6111
This gives us the probabilities in (52): That is, the string empire est au is about 2.4 times more likely than the string empire sont au.
the count for the second bigram in each example in (52) can of course be discounted, as the juxtaposition of sont or est with au bears no relevance to the correctness or otherwise of the translations in (51).
contrasting
train_6112
Also, it is not a balanced corpus, as it contains material from only one genre, namely, news text.
the text originates from a variety of sources (Los Angeles Times, Washington Post, New York Times News Syndicate, Reuters News Service, and Wall Street Journal).
contrasting
train_6113
Table 3 shows a subset of the dependencies MINIPAR outputs for the sentence The fat cat ate the door mat.
to Gsearch and Cass, MINIPAR produces all possible parses for a given sentence.
contrasting
train_6114
Our goal was to demonstrate that frequencies retrieved from the Web are a viable alternative to conventional smoothing methods when data are sparse; we do not claim that our Web-based method is necessarily superior to smoothing or that it should be generally preferred over smoothing methods.
the next section will present a small-scale study that compares the performance of several smoothing techniques with the performance of Web counts on a standard task from the literature.
contrasting
train_6115
The joint probability model performs worse than the conditional model, at 81.1%.
this is still significantly better than the best result of Clark and Weir (2002) (χ²(1) = 63.14, p < .01).
contrasting
train_6116
The chapters by Worden, Batali, Kirby, and Hurford (see Table 1), whom I shall refer to collectively as WBKH, use computer simulation to demonstrate that agents with no innate syntactic structure can interact to create and preserve both the lexicon and syntax of languages over many generations.
the chapters by Niyogi, Turkel, and Briscoe (collectively, NTB) all base their simulations on an innate Universal Grammar: the general form of grammar is built into the agents, and the lexicon is given little or no importance.
contrasting
train_6117
Such structures might seem advantageous in allowing the semantics of the example to be computed directly by compositional rules and defeasible inference.
both structures are directed acyclic graphs (DAGs), with acyclicity the only constraint on what nodes can be connected.
contrasting
train_6118
Not only does he eat them for dinner.
c. he's allergic to them.
contrasting
train_6119
Again, we have abstracted over the individual, as the presupposed defeasible rule associated with the present-tense sentence appears to be more general than a statement about a particular individual.
in the following example illustrating a presupposed defeasible rule and a discourse relation associated with adjacency, it seems possible for the presupposed defeasible rule to be about John himself: (57) John is discussing politics.
contrasting
train_6120
For example, suppose you needed some money: You'd only have to ask him for it.
he's hard to find.
contrasting
train_6121
(A more precise interpretation would distinguish between the direct and epistemic causality senses of because, but the derivation would proceed in the same way.)
with (66a), example (66b) employs an auxiliary tree (β:punct1) anchored by a period (Figure 13).
contrasting
train_6122
A binary-branching (Chomsky adjunction) grammar can generate an unlimited number of adjuncts with very few rules.
for example, the following grammar generates any sequence VP → V NP PP*: the Penn Treebank style would create a new rule for each number of PPs seen in training data.
contrasting
train_6123
Similar explanations can be given for other classification decisions.
mB uses only the instances in A and B to construct a classifier.
contrasting
train_6124
This is clearly a simplification, since one would expect F(c, f , v) to vary for different verb classes.
note that according to this estimation, F(f , c) will vary across frames reflecting differences in the likelihood of a class being attested with a certain frame.
contrasting
train_6125
One might expect an accuracy of 100%, since these verbs can be disambiguated solely on the basis of their frame.
our model achieves lower accuracy, mainly because of the way we estimate the terms P(c) and P(f | c): We overemphasize the importance of class information without taking into account how individual verbs distribute across classes.
contrasting
train_6126
Four types of alternations were investigated in this study.
Levin lists 79 alternations and approximately 200 classes.
contrasting
train_6127
The models proposed by Chao and Dyer (2000) and Ciaramita and Johnson (2000) are not directly applicable to Levin's classification, as the latter is not a hierarchy (and therefore not a DAG) and cannot be straightforwardly mapped into a Bayesian network.
in agreement with Chao and Dyer and Ciaramita and Johnson, we show that prior knowledge about class preferences improves word sense disambiguation performance.
contrasting
train_6128
As it is very hard to find all of the words in the original corpus that would be found meaningful by a Chinese person, it is very hard to count recall in the traditional way, that is, the number of meaningful words extracted divided by the number of all meaningful words in the original data.
it is also impossible to approach traditional precision and traditional recall by comparing the hand-segmented sample sentences and the automatically segmented sentences, as people usually do, because our method does not touch upon segmentation.
contrasting
train_6129
The performance trends that were observed in Table 2 can be also observed here.
as this corpus is much larger than the previous one, many characters have the chance to occur together to form spurious words.
contrasting
train_6130
Therefore, was extracted as a word.
if we list too many characters as adhesive characters, the partial recall will be degraded.
contrasting
train_6131
The two coders still disagree on 25 occurrences of Okay.
one coder now labels 10 of those as Accept and the remaining 15 as Ack, whereas the other labels the same 10 as Ack and the same 15 as Accept.
contrasting
train_6132
Siegel and Castellan's κ_S&C is not affected by bias, whereas Cohen's κ_Co is.
it is questionable whether the assumption of equal distributions underlying κ_S&C is appropriate for coding in discourse and dialogue work.
contrasting
train_6133
The definition proposed by Grosz, Joshi, and Weinstein allows inferable entities, that is, entities that are not expressed at the surface level of the utterance or immediately recoverable from the subcategorization properties of the verb (as, for example, zero pronouns are) to constitute centers of an utterance.
the theory does not make explicit the parameters within which to characterize the class of permissible inferable elements or the constraints on doing so.
contrasting
train_6134
The drawback of this approach is that grammars are by their very nature application and domain specific.
automatic learning techniques may be adopted to learn from available examples.
contrasting
train_6135
The vocabulary of the new representation of the German part of the Verbmobil corpus, for example, in which full word forms are replaced by base form plus morphological and syntactic tags (lemma-tag representation), is one and a half times as large as the vocabulary of the original corpus.
the information in the lemma-tag representation can be accessed gradually and ultimately reduced: For example, certain instances of words can be considered equivalent.
contrasting
train_6136
This is greater than the probability P_TP of the pair una camera doppia/a room with two beds, 1.0 • 0.4 • 1.0 = 0.4.
the Viterbi score V_TP for the first pair is 1.0 • 0.3 • 1.0 = 0.3, which is lower than the Viterbi score V_TP for the second pair, 1.0 • 0.4 • 1.0 = 0.4.
contrasting
train_6137
It is well-known that a 1-gram entails a maximum generalization, allowing (extended) words to follow one another.
for sufficiently large m, a (nonsmoothed) m-gram is just an exact representation of the training strings (of extended words, in our case).
contrasting
train_6138
This could lead to the addition of new, superfluous strings, provided the states to which we add transitions are reentrant/confluence states.
the algorithm excludes such cases.
contrasting
train_6139
Most of the speedup was not the result of using an algorithm optimized for sorted data: an improvement to the algorithm for adding strings in Carrasco and Forcada (2002), consisting in avoiding unnecessary cloning of prefix states (as described in section 3.2 and mentioned on page 215 of Carrasco and Forcada [2002] as a suggestion from one of Carrasco and Forcada's reviewers), was 3.12 and 2.35 times faster, respectively, than the original algorithm.
the new algorithm is still the fastest.
contrasting
train_6140
Specifically, if you already work with LEXC or XFST, this is the bible of the applications.
if you are interested in the mathematics of finite-state networks or in the computational aspects of their implementations, you are probably better off with a book such as Roche and Schabes (1997).
contrasting
train_6141
A minor note: The book is extremely well-written, its language is fluent and lucid, and hard as I tried, I could not find a single error or typo.
the repeated references to Xerox, the Xerox linguists, and the Xerox developers of the technology described in the book are exaggerated.
contrasting
train_6142
The Collins Parser is a fully supervised, history-based learner that models the parameters of the parser by taking statistics directly from the training data.
pLTIG's expectation-maximization-based induction algorithm is partially supervised; the model's parameters are estimated indirectly from the training data.
contrasting
train_6143
The baseline case requires an average of about 38,000 brackets in the training data.
to induce a grammar that reaches the same 80% parsing accuracy with the examples selected by f_unc, the learner requires, on average, 19,000 training brackets.
contrasting
train_6144
Current extraction and retrieval technology focuses almost exclusively on the subject matter of documents.
additional aspects of a document influence its relevance, including evidential status and attitude (Kessler, Nunberg, Schütze 1997).
contrasting
train_6145
She uses machine learning to select among manually developed features.
the focus in our work is on automatically identifying features from the data.
contrasting
train_6146
Although Strong C1 is still not verified if we consider all 669 segments of text that contain NPs, the number of utterances that satisfy Strong C1 (264) is slightly larger than the number of those that don't (260).
identifying utterances with sentences also has several negative (if small) effects.
contrasting
train_6147
As shown in Table 12, with this instantiation (henceforth, IS) 390 utterances out of 669 (58.3%) satisfy Strong C1, and 177 (26.5%) violate it, which is significantly better than the instantiation with u=s and direct realization (henceforth, DS).
the number of utterances with more than one CB almost doubles again (and with respect to the DS instantiation), to 26 (3.9%) from 14 (2.1%).
contrasting
train_6148
As a result, the percentage of "continuous" transitions (Kibble 2001)-EST, CON, RET, SSH, and RSH-increases.
RSH and SSH increase, as well as EST and CON: In the IF instantiation with PRO2s, EST are the most common transition (20.2%), but in the IS instantiation, RSH are (18.4%).
contrasting
train_6149
The improvements are greater with the u=s instantiations, since with sentences it's more common for more than one CF to be realized in the same grammatical position; for example, in the IS instantiations in which PRO2s are considered as realizations of CFs, we find that 5.7% of utterances (38/669) have more than one CB.
strong C1 is already verified with these instantiations, even with simple grammatical function.
contrasting
train_6150
This claim is further supported by the evidence concerning the Repeated Name Penalty (Gordon, Grosz, and Gillion 1993).
the RNP is observed only in a subset of the cases that would be considered as CB mentions according to the definition provided by Constraint 3, and in the example we are discussing (10), neither Branicki nor the cupboard occur in (u229) in a position that would be subject to RNP effects according to Gordon et al.
contrasting
train_6151
But the third term may increase.
we can show that any increase in the third term is more than offset in the labeling step.
contrasting
train_6152
In the current context, there is also justification for pursuing agreement on unlabeled data.
the Yarowsky algorithm does not assume the existence of two conditionally independent views of the data.
contrasting
train_6153
Of course coherence of a text depends on the realization of rhetorical relations (Mann and Thompson 1987) as well as referential continuity, and the latter is to an extent a byproduct of the former, as clauses that are rhetorically related also tend to mention the same entities.
even when a set of facts is arranged in a hierarchical RST structure, there are still many possible linear orderings with noticeable differences in referential coherence.
contrasting
train_6154
Karamanis (2001), Kibble (2001), and Beaver (2004) have argued for a ranking of the centering principles as opposed to weighting, and indeed Beaver provides a unified formulation of the centering rules and constraints as a ranked set of OT constraints.
we believe that such a ranking stands in need of empirical justification, and Beaver's data actually provide little evidence for strict ranking as opposed to weighting of constraints (see Kibble 2003).
contrasting
train_6155
For GL, a drastic improvement can be observed for all subclasses.
for very short words of BL, only a modest improvement is obtained.
contrasting
train_6156
For negative z, the two functions behave quite differently.
f(z) = e^{−z} shows an exponentially growing cost function as z → −∞. As z → −∞ it can be seen that log(1 + e^{−z}) → log(e^{−z}) = −z, so this function shows asymptotically linear growth for negative z.
contrasting
train_6157
In the worst case, when every feature chosen appears on every training example, then C/T = 1, and the two algorithms essentially have the same running time.
in sparse feature spaces there is reason to believe that C/T will be small for most iterations.
contrasting
train_6158
First we recap the definition of the probability of a particular parse x_{i,q} given parameter settings ᾱ: P(x_{i,q} | s_i, ᾱ) = e^{F(x_{i,q}, ᾱ)} / Σ_{j=1}^{n_i} e^{F(x_{i,j}, ᾱ)}. Recall that the log-loss is Unfortunately, unlike the case of ExpLoss, in general an analytic solution for BestWt does not exist.
we can define an iterative solution using techniques from iterative scaling (Della Pietra, Della Pietra, and Lafferty 1997).
contrasting
train_6159
Only those features which co-occur with k* on some example will need to have their values of BestWt and BestLoss updated.
this observation does not lead to an efficient algorithm: Updating these values is much more expensive than in the ExpLoss case.
contrasting
train_6160
Similarly to FrameNet, PropBank also attempts to label semantically related verbs consistently, relying primarily on VerbNet classes for determining semantic relatedness.
there is much less emphasis on the definition of the semantics of the class that the verbs are associated with, although for the relevant verbs additional semantic information is provided through the mapping to VerbNet.
contrasting
train_6161
The original treebank syntactic tree contains a trace which would allow one to recover this relation, coindexing the empty subject position of support with the noun phrase Big investment banks.
our automatic parser output does not include such traces.
contrasting
train_6162
Note that (12) contains some words (e.g., article and treaty) that do not actually occur with position ⟨bright, approve⟩ in the corpus.
as these words actually occur with most of the positions that are similar to ⟨bright, approve⟩, we may assume that they satisfy the condition of this particular position.
contrasting
train_6163
In (15), a single set of words is associated with the two positions, since the positions have in common the same semantic condition (or selection restrictions).
the scope of the condition is still too narrow: It merely embraces two positions.
contrasting
train_6164
Words are weighted considering their dispersion (global weight) and their conditional probability given a position (local weight).
the weight Assoc, measuring the degree of association between word w and position p, is computed by equation 16: the conditional probability P_MLE is estimated by using the maximum likelihood estimate (MLE), which is calculated in (17): where f(p, w) represents the frequency of word w appearing in position p, and F(p) is defined, for a particular position, as the total sum of its word frequencies: Σ_i f(p, w_i).
contrasting
train_6165
Learned clusters represent linguistic requirements that cannot be reduced to a smaller set of general syntactico-semantic roles, such as Agent, Patient, Theme, and Instrument.
they cannot be associated with word-specific roles like, for instance, Reader, Eater, and Singer.
contrasting
train_6166
They are situated, in fact, at the domain-specific level, which is considered more appropriate for use in computational tasks (Gildea and Jurafsky 2002).
given the too-restrictive constraints of the algorithm, the clustering method also overgenerates redundant clusters.
contrasting
train_6167
Similarly to our approach, these methods follow both the relative view on word similarity and the assumption on contextual word sense, which were introduced above in sections 3.3 and 3.4, respectively.
these methods differ from ours in several aspects.
contrasting
train_6168
As in Clustering 1 (section 7.1), each basic cluster selects only those words occurring in all positions of the cluster.
allegrini, Montemagni, and Pirrelli (2000) define a second clustering step involving significant differences with regard to our Clustering 2.
contrasting
train_6169
These methods, then, follow both the absolute view on word similarity and Harris's distributional hypothesis, which we introduced in section 2.3.
in order to make the absolute view more relative, a collection of small and tight clusters (called committees) is proposed in a first step.
contrasting
train_6170
On the one hand, the words occurring in p are called the cohorts of w. The cohorts are similar to w only with regard to position p (relativized view).
a corpus-based thesaurus is used to select words similar to w with regard to its whole position distribution (absolute view).
contrasting
train_6171
There are no significant differences between the scores obtained from corpus EC and those from PGR; CR, for instance, obtains very similar F-scores over the two corpora.
there are important differences among the precision values associated with the three phrase sequences.
contrasting
train_6172
Clearly, the study of schema-based argumentation is a worthwhile endeavor.
it would have been better if the book had, at least in part, considered a combination of these approaches, rather than swinging completely to the "rhetorical approach only" side.
contrasting
train_6173
I would recommend this book to graduate students or researchers who intend to work in argumentation.
at the same time, I would suggest reading the book critically and complementing it with material about quantitative and probabilistic methods and user modeling.
contrasting
train_6174
One of the first proposals made was an analysis of German scrambling data using nonlocal MCTAG with additional dominance constraints (Becker, Joshi, and Rambow 1991).
the formal properties of nonlocal MCTAG are not well understood, and it is assumed that the formalism is not polynomially parsable.
contrasting
train_6175
Whether this grammar is considered to be a general RSN-MCTAG or an RSN-MCTAG of arity four does not matter in this case, since even in the general case, all possible SN-derivation structures are of arity four.
in the case of other RSN-MCTAGs, the restriction to a certain arity might exclude certain TAG derivation trees and thereby decrease the language generated by the grammar.
contrasting
train_6176
Finally, we were surprised that the number of concepts per word was almost identical for the five English dictionaries tested (see Table 2).
we found that this was not always the case, since further trials on six other English dictionaries gave a larger range of values, shown in Table 4, varying from 1.23 to 1.55.
contrasting
train_6177
Furthermore, accounts of discourse structure vary greatly with respect to how many discourse relations they assume, ranging from 2 (Grosz and Sidner 1986) to over 400 different coherence relations (reported in Hovy and Maier [1995]).
Hovy and Maier (1995) argue that, at least for informational-level accounts, taxonomies with more relations represent subtypes of taxonomies with fewer relations.
contrasting
train_6178
Then he disappeared in a liquor store.
to cause-effect relations, there is no causal relation between the events described by the two discourse segments.
contrasting
train_6179
Structural discourse relations are represented within a lexicalized tree-adjoining grammar framework, and the resultant structural discourse structure is represented by a tree.
more recently, Webber et al.
contrasting
train_6180
Note furthermore that the RST tree contains an evaluation-s relation between 6 and 1-5.
this evaluation-s relation seems to hold rather between 6 and 3-4: What is being evaluated is a chance for the Contras to win. [Table 6: Comparison for (26) of the tree-based RST structure (from Carlson, Marcu, and Okurowski 2002) and our chain-graph-based structure.]
contrasting
train_6181
the discussion of Knott [1996] in section 3).
there is a possible alternative hypothesis to Knott's (1996).
contrasting
train_6182
In sum, the statistical results on nodes with multiple parents suggest that they are a frequent phenomenon and that they are not limited to certain kinds of coherence relations.
as with crossed dependencies, removing certain kinds of coherence relations (elaboration and similarity) can reduce the mean in-degree of nodes and the proportion of nodes with in-degree greater than one.
contrasting
train_6183
Since Jean Carletta (1996) exposed computational linguists to the desirability of using chance-corrected agreement statistics to infer the reliability of data generated by applying coding schemes, there has been a general acceptance of their use within the field.
there are prevailing misunderstandings concerning agreement statistics and the meaning of reliability.
contrasting
train_6184
The application of agreement statistics has done much to improve the scientific rigor of discourse and dialogue research.
unless we understand what we are attempting to prove and which tests are appropriate, the results of evaluation can be unsatisfactory or, worse still, misleading.
contrasting
train_6185
On the one hand, the presence of numerous sources conveying the same information causes difficulties for end users of search engines and news providers; they must read the same information over and over again.
redundancy can be exploited to identify important and accurate information for applications such as summarization and question answering (Mani and Bloedorn 1997; Radev and McKeown 1998; Radev, Prager, and Samn 2000; Clarke, Cormack, and Lynam 2001; Dumais et al.
contrasting
train_6186
At one extreme, we might consider a shallow approach to the fusion problem, adapting the "bag of words" approach.
sentence intersection in a set-theoretic sense produces poor results.
contrasting
train_6187
At the other extreme, previous approaches (Radev and McKeown 1998) have demonstrated that this task is feasible when a detailed semantic representation of the input sentences is available.
these approaches operate in a limited domain (e.g., terrorist events), where information extraction systems can be used to interpret the source text.
contrasting
train_6188
This algorithm has several features in common with our method: It operates over syntactic dependency representations and employs recursive computation to find an optimal solution.
our method is different in two key aspects.
contrasting
train_6189
In this article we present an approach to automating the process of lexical acquisition for LFG (i.e., grammatical-function-based systems).
our approach also generalizes to CFG category-based approaches.
contrasting
train_6190
Dalrymple (2001) argues that there are cases, albeit exceptional ones, in which constraints on syntactic category are an issue in subcategorization.
to much of the work reviewed in Section 3, which limits itself to the extraction of surface syntactic subcategorization details, our system can provide this information as well as details of grammatical function.
contrasting
train_6191
They use, for example, a relatively coarse-grained notion of semantic compatibility between a few high-level concepts in WordNet (Soon, Ng, and Lim 2001), or more detailed hyponymy and synonymy links between anaphor and antecedent head nouns (Vieira and Poesio 2000; Harabagiu, Bunescu, and Maiorano 2001; Ng and Cardie 2002b, among others).
several researchers have pointed out that the incorporated information is still insufficient.
contrasting
train_6192
Thus, the desired link might be derived through the analysis of the gloss together with derivation of periodical from periodic.
such extensive mining of the ontology (as performed, e.g., by Harabagiu, Bunescu, and Maiorano [2001]) can be costly.
contrasting
train_6193
9 Using all senses of anaphor and potential antecedents in the search for relations might yield a link between an incorrect antecedent candidate and the anaphor due to an inappropriate sense selection.
considering only the most frequent sense for anaphor and antecedent (as is done in Soon, Ng, and Lim [2001]) might lead to wrong antecedent assignment if a minority sense is intended in the text.
contrasting
train_6194
We used a t-test to measure the difference between two algorithms in the proportion of correctly resolved anaphors.
there are many examples which are easy (for example, string-matching examples) and that therefore most or all algorithms will resolve correctly, as well as many that are too hard for all algorithms.
contrasting
train_6195
String matching is quite successful for coreference (baselineSTR v2n covers nearly 70% of the cases with a precision of 80.9%).
because algoWeb v4n never overrules string matching, the errors of baselineSTR v2n are preserved here and account for 24.1% of all mistakes.
contrasting
train_6196
I gave the paper and stimulated some laughter.
sitting in the audience were Margaret Masterman and Frederick Parker Rhodes.
contrasting
train_6197
I was precocious, but very junior, so that my role was no more than that of a fly on the wall.
I was indeed present at a meeting in the office of David Hays at Rand when the name "computational linguistics" was settled upon.
contrasting
train_6198
In other applications, distributional similarity is taken to be an approximation to semantic similarity.
due to the wide range of potential applications and the lack of a strict definition of the concept of distributional similarity, many methods of calculating distributional similarity have been proposed or adopted.
contrasting
train_6199
Hindle and Rooth (1993) note that the correct decision depends on all four lexical events (the verb, the object, the preposition, and the prepositional object).
a statistical model built on the basis of four lexical events must cope with extremely sparse data.
contrasting