id: string, length 7 to 12
sentence1: string, length 6 to 1.27k
sentence2: string, length 6 to 926
label: string, 4 classes
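For orientation, a minimal sketch of how records with the four fields above could be read and iterated; the JSONL filename below is hypothetical, since this dump does not name its source file.

```python
import json

# Hypothetical export filename; adjust to wherever this split is stored.
PATH = "contrasting_pairs_train.jsonl"

with open(PATH, encoding="utf-8") as f:
    for line in f:
        rec = json.loads(line)
        # Each record carries an id (e.g., "train_6000"), a sentence pair,
        # and one of four relation labels (e.g., "contrasting").
        print(f'{rec["id"]}: {rec["label"]}')
        print("  s1:", rec["sentence1"])
        print("  s2:", rec["sentence2"])
```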
train_6000
This would not be a fault if the authors were content to present their findings as purely descriptive.
they do venture a few theoretical claims that have little foundation and in fact contradict their own descriptive practice.
contrasting
train_6001
Secondly, there is no guarantee that the techniques employed will be applicable when a larger vocabulary is tackled.
it is likely that mark-up for a restricted vocabulary can be carried out more rapidly since the subject has to learn the possible senses of fewer words.
contrasting
train_6002
This method would tag all content words in a sentence with their senses from a dictionary that contains textual definitions.
it was found that the computations which would be necessary to test every combination of senses, even for a sentence of modest length, were prohibitive.
contrasting
train_6003
… implementation this was done using a simple count of the number of words (tokens) in common between all the definitions for a given choice of senses.
this method prefers longer definitions, since they have more words that can contribute to the overlap, and short definitions or definitions by synonym are correspondingly penalized.
contrasting
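The simple-overlap count described in train_6003 is easy to make concrete; a minimal sketch, with the function name and whitespace tokenization assumed rather than taken from the paper:

```python
from itertools import combinations

def overlap_score(definitions: list[str]) -> int:
    # Score one choice of senses by counting the tokens shared between
    # every pair of their textual definitions.
    score = 0
    for a, b in combinations(definitions, 2):
        ta, tb = a.lower().split(), b.lower().split()
        score += sum(min(ta.count(w), tb.count(w)) for w in set(ta))
    return score
```

Longer definitions supply more candidate tokens, which is exactly the bias toward long definitions that the record notes.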
train_6004
Then a corpus with an average polysemy of 5, and 2 senses marked correct on each ambiguous token, will have a baseline not less than 40%.
one with an average polysemy of 2, and only a single sense on each, will have a baseline of at least 50%.
contrasting
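The baselines quoted in train_6004 follow from dividing the number of senses marked correct per token by the average polysemy; a worked check (names illustrative):

```python
def random_baseline(senses_marked_correct: int, avg_polysemy: float) -> float:
    # Expected accuracy of picking a sense uniformly at random.
    return senses_marked_correct / avg_polysemy

assert random_baseline(2, 5) == 0.40  # "not less than 40%"
assert random_baseline(1, 2) == 0.50  # "at least 50%"
```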
train_6005
This is because it is common for English words to act as either a noun or a verb and, since these are the most polysemous grammatical categories, the average polysemy count becomes large due to the cumulative effect of polysemy across grammatical categories.
words that can act as adjectives or adverbs are unlikely to be nouns or verbs.
contrasting
train_6006
One can legitimately wonder whether in fact the different knowledge sources for WSD are all ways of encoding the same semantic information, in a similar way that one might suspect transformation rules and statistics encode the same information about part-of-speech tag sequences in different formats.
the fact that an optimized combination of our partial taggers yields a significantly higher figure than any one tagger operating independently shows that they must be orthogonal information sources.
contrasting
train_6007
We have already examined the usefulness of part-of-speech tags for semantic disambiguation in Section 3.
we now want to know the effect it has within a system consisting of several disambiguation modules.
contrasting
train_6008
It also remains an interesting question whether, because of the undoubted existence of novel senses in text, a sense tagger can ever reach the level that part-of-speech tagging has.
we believe we have shown that interesting combinations of WSD methods on a substantial training corpus are possible, and that this can show, among other things, the relative independence of the types of semantic information expressed by the various forms of lexical input.
contrasting
train_6009
Although speech lacks explicit demarcation of word boundaries, it is undoubtedly the case that it nevertheless possesses significant other cues for word discovery.
it is still a matter of interest to see exactly how much can be achieved without the incorporation of these other cues; that is, we are interested in the performance of a bare-bones language model.
contrasting
train_6010
The maximum likelihood approach due to Olivier (1968) is probabilistic in the sense that it is geared toward explicitly calculating the most probable segmentation of each block of input utterances (see also Batchelder 1997).
the algorithm involves heuristic steps in periodic purging of the lexicon and in the creation in the lexicon of new words.
contrasting
train_6011
As expected, the effect of reduced history is apparent: in the trigram model, -log P(house|the, dog) = 4.8 and -log P(dog|in, the) = 5.4, giving a score of 10.2 to the segmentation dog house.
due to the error in transcription, the trigram in the doghouse is never encountered in the training data, although the bigram the doghouse is.
contrasting
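The segmentation score in train_6011 is just the sum of the two negative log probabilities; a quick check (variable names assumed):

```python
neg_logp_house = 4.8  # -log P(house | the, dog)
neg_logp_dog = 5.4    # -log P(dog | in, the)

# The two-word segmentation "dog house" is scored by summing the terms.
assert round(neg_logp_house + neg_logp_dog, 1) == 10.2
```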
train_6012
Generally, we would expect that causative verbs (in our case, the unergative and unaccusative verbs) would have a greater degree of overlap of nouns in subject and object position than non-causative transitive verbs (in our case, the object-drop verbs).
since the causative is a transitive use, and the transitive use of unergatives is expected to be rare (see above), we do not expect unergatives to exhibit a high degree of detectable overlap in a corpus.
contrasting
train_6013
Assessing the percentage of verbs on which the experts agree gives us an intuitive measure.
this measure does not take into account how much the experts agree over the expected agreement by chance.
contrasting
train_6014
In fact, the accuracy of 69.5% achieved by the program is only 1.5% less than one of the human experts in comparison to the gold standard.
the algorithm still does not perform at expert level, as indicated by the fact that, for all experts, the lowest agreement score is with the program.
contrasting
train_6015
We conclude that the verb classification task is likely easier for very homogeneous classes, and more difficult for more broadly defined classes, even when the exemplars share the critical syntactic behaviors.
frequency does not appear to be a simple factor in explaining patterns of agreement between experts, or increases in accuracy.
contrasting
train_6016
We can see here that different frequency groups yield different classification behavior.
the relation is not simple, and it is clearly affected by the composition of the frequency group: the middle group contains mostly unaccusative and object-drop verbs, which are the verbs with which our experts have the most difficulty.
contrasting
train_6017
This is a very important property of lexical acquisition algorithms to be used for lexicon organization, as their main interest lies in being applied to unknown words.
our approach is similar to the approaches of Siegel, and Lapata and Brew (1999), in attempting to learn semantic notions from distributions of … [Footnote 12: A baseline of 5% is reported, based on a closest-neighbor pairing of verbs, but it is not straightforward to compare this task to the proposed clustering algorithm.]
contrasting
train_6018
Like Lapata and Brew, three of our indicators--TRANS, VBN, PASS--are based on the assumption that distributional differences in subcategorization frames are related to underlying verb class distinctions.
we also show that other syntactic indicators--CAUS and ANIM--can be devised that tap directly into the argument structure of a verb.
contrasting
train_6019
A header is then the layout equivalent of a discourse marker.
since headers cannot typically function as relationally as discourse markers (because they cannot readily point back to their point of origin), they tend instead to give sufficient information for the reader to make a connection--for example, by summarizing the nucleus of their segments.
contrasting
train_6020
This section has set out a procedure for producing layout structures from rhetorical structures.
the mapping remains highly nondeterministic.
contrasting
train_6021
This is a little forced, but useful for providing an overview of the lexicon in speech processing.
there is a strong element of Unix hacking, without a comprehensive view of traditional computational lexicology and no reference to computational lexicography as may be practiced by dictionary publishers (e.g., corpus evidence).
contrasting
train_6022
The editors have selected an interesting group of papers, and provide a clear introduction with useful summaries of the chapters, pointing out some interesting relationships between the different lines of research.
despite the high price of the book, there is no evidence that a competent professional copy editor was involved in the process of publication.
contrasting
train_6023
In addition, the book provides a wealth of descriptive information on GCI as well as many pointers to related work in theoretical linguistics.
unfortunately, the book lacks a computational or formal orientation.
contrasting
train_6024
At the same time, under the Unicode initiative, there has been solid progress in the definition of and, finally, the implementation of standards for computer encoding and rendering of scripts used around the world.
the implications of this multi-lingual revolution for computational linguistics beyond the level of word-processing have not been well explored, and Richard Sproat's book A Computational Theory of Writing Systems is a welcome contribution.
contrasting
train_6025
Beginning with the pioneering work of Hobbs (1978), many practical approaches rely on the availability of syntactic surface structure by employing coindexing restrictions, salience criteria, and parallelism heuristics (e.g., Lappin and Leass 1994).
even the assumption of the availability of a unique syntactic description is unrealistic since, in general, parsing involves the solution of difficult problems like attachment ambiguities, role uncertainty, and the instantiation of empty categories.
contrasting
train_6026
Clearly, this holds with respect to the candidates der and ihn, which are contained in the same surface structure fragment.
even regarding the two candidates Mann and Präsidenten that occur in the other fragment, there is no loss of evidence: since the reflexive pronoun is of binding-theoretic type A, and the fragment in which it occurs contains its binding category (the S node of the …). [Footnote 7: This statement applies solely to the direct comparison of the involved occurrences, since in case of further, transitive coindexings, negative evidence stemming from decision interdependency may get lost (cf. …).]
contrasting
train_6027
Regarding the errors made during the identification of specifying occurrences, this goal may be met by basing the measures on the sets O++, O+-, O+?, and O+∅ that constitute the cases in which the base entity to be decided upon, namely, the pronoun occurrence P, has been determined in compliance with the key.
by further drawing the usual distinction between precision and recall, according to which, in the latter case, one must take into account empty anchors A as well, one obtains the following definitions: P = |O++| / (|O++| + |O+-| + |O+?|) and R = |O++| / (|O++| + |O+-| + |O+?| + |O+∅|). Since errors in the occurrence identification are excluded from measurement at this stage of evaluation, and, moreover, it is assumed that there are no errors regarding the classification of occurrences as decision-relevant entities (i.e., pronouns), it follows that, in any case, P ≥ R; the characteristic trade-off relation between precision and recall holds anyway.
contrasting
train_6028
On the Mozart operas corpus, which, in this sense, is easier, the scores are considerably higher.
since different evaluation corpora have been used, and, moreover, since the precision figure mentioned in the results of Kennedy and Boguraev (1996) is not qualified with respect to text genre, a direct comparison of the empirical results should be based on further investigations.
contrasting
train_6029
For example, (Metaphor)1 should be assigned "organization" but it is assigned "unknown" in (18), and (second-quarter)2 should be assigned "date" instead of "unknown" in (19).
correcting these classes will still not cause the noun phrases in the examples to corefer.
contrasting
train_6030
For example, JV-CHILD-i was changed to CHILD-i to decide whether i is a "unit" or a "subsidiary" of a certain parent company.
to RESOLVE, our system makes use of a smaller set of 12 features and, as in Aone and Bennett's (1995) system, the features used are generic and applicable across domains.
contrasting
train_6031
Grosz, Joshi, and Weinstein (1995) state that Cf may be ordered using different factors, but they only use information about grammatical roles.
both Strube (1998) and Strube and Hahn (1999) point out that it is difficult to define grammatical roles in free-word-order languages like German or Spanish.
contrasting
train_6032
Kessler exhorts everybody who does statistical testing to report all unsuccessful tests together with the successful ones.
he also notes that it is common in the literature on long-range comparisons to present only the most striking results.
contrasting
train_6033
If this were the only recently published book that brought together robust language and speech processing in an accessible way, then I would have no hesitation in recommending it as a convenient reference.
the book faces stiff competition from excellent books by Rayner et al.
contrasting
train_6034
[1994] and much research since then).
the publication of this volume does make previously difficult-tofind material more readily available and does provide an interesting historical context for contemporary research.
contrasting
train_6035
Accordingly, they have been considered one of the most robust and intriguing grammar submodules, usually referred to as binding theory.
in contrast to this, the formal and computational handling of binding constraints has presented considerable resistance.
contrasting
train_6036
Nevertheless, this solution turns out to be dependent on a top-down parsing strategy.
while Ingria and Stallard's implementation is independent of the parsing strategy adopted, its independence comes at the cost of still requiring a special-purpose postgrammatical parsing module for binding.
contrasting
train_6037
Like other types of constraints on semantic composition, binding constraints impose conditions on the interpretation of certain expressions--anaphors, in the present case--based on syntactic geometry.
this cannot be viewed as implying that they express grammaticality requirements.
contrasting
train_6038
Precision is the percentage of boundaries identified by an algorithm that are indeed true boundaries; recall is the percentage of true boundaries that are identified by the algorithm.
precision and recall are problematic for two reasons.
contrasting
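A minimal sketch of the two measures as defined in train_6038, with boundaries represented as sets of positions (names assumed, not from the paper):

```python
def boundary_precision_recall(hyp: set[int], ref: set[int]) -> tuple[float, float]:
    # Precision: fraction of hypothesized boundaries that are true.
    # Recall: fraction of true boundaries that were hypothesized.
    hits = len(hyp & ref)
    precision = hits / len(hyp) if hyp else 0.0
    recall = hits / len(ref) if ref else 0.0
    return precision, recall

# Echoes train_6039: a near-miss scores exactly like a gross error.
assert boundary_precision_recall({3, 8}, {4, 9}) == (0.0, 0.0)
```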
train_6039
In both cases, the algorithms fail to match any boundary precisely; both receive scores of 0 for precision and recall.
algorithm a-0 is close to correct in almost all cases, whereas algorithm a-1 is entirely off, adding extraneous boundaries and missing important boundaries entirely.
contrasting
train_6040
This average increases with segment size, as we will discuss later, and changes if one assumes different distributions of false positives throughout the document.
this does not change the fact that in most cases, false positives are penalized some amount less than false negatives.
contrasting
train_6041
This method of penalizing false positives achieves this goal: the closer the algorithm's boundary is to the actual boundary, the less it is penalized.
overpenalizing false negatives to do this is not desirable.
contrasting
train_6042
One way to fix the problem of penalizing false negatives more than false positives is to double the false positive penalty (or halve the false negative penalty).
this would undermine the probabilistic nature of the metric.
contrasting
train_6043
From the above discussion, we know that WD is more lenient in situations where a false negative and a false positive occur near each other (where "near" means within a distance of k/2) than P_k is.
P_k is more lenient for pure false positives that occur close to boundaries.
contrasting
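For concreteness, one common formulation of the WD metric discussed in train_6043, with boundaries encoded as a 0/1 string; a sketch under that encoding, not code from the cited work:

```python
def window_diff(ref: str, hyp: str, k: int) -> float:
    # ref/hyp: '1' marks a boundary at a position, '0' no boundary.
    # Slide a width-k window and count windows where the two boundary
    # counts disagree; normalize by the number of windows.
    assert len(ref) == len(hyp) and len(ref) > k
    n = len(ref)
    errors = sum(
        ref[i:i + k].count("1") != hyp[i:i + k].count("1")
        for i in range(n - k + 1)
    )
    return errors / (n - k + 1)
```

Because a nearby false positive and false negative often fall inside the same window, their boundary counts can cancel there, which is the leniency the record describes.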
train_6044
The usual way of handling unknown morphemes is to guess all the possible POS tags for an unknown morpheme by checking connectable functional morphemes in the same eojeol (Kang 1993).
in this way, it is only possible to guess probable POS tags for a single unknown morpheme when it occurs at the beginning of an eojeol.
contrasting
train_6045
Rule schema acronyms: N1FT = the tag of the first morpheme (FT) of the next eojeol (N1); P1LT = the tag of the last morpheme (LT) of the previous eojeol (P1); N2FT = the tag of the first morpheme (FT) of the eojeol after the next one (N2); N3FT = the tag of the first morpheme (FT) of the second eojeol after the next one (N3) … of the statistical morpheme tagger.
designing the error correction rules with knowledge engineering is tedious and error prone.
contrasting
train_6046
Most tagging systems for Korean have applied flat, fixed tagsets and have suffered from using varying tagsets in various applications.
pOSTAG's tagsets, based on the over 100 finely differentiated POS symbols for Korean, are hierarchically organized and are flexibly reorganizable according to the application at hand.
contrasting
train_6047
… and over (pages 330-331), extended from their basic senses that are presumed to be rooted in direct physical perception.
these do not give rise to productive procedures that could reliably generate extended senses for equivalent words in other languages, for instance, or other words denoting physical relations or body parts.
contrasting
train_6048
A recent issue of this journal described the state of the art in automatic sense disambiguation (Ide and Véronis 1998), and Senseval system competitions have revealed the immense difficulty of the task (http://www.sle.sharp.co.uk/senseval2).
no significant progress can be made on the computational aspects of polysemy without serious advances in theoretical issues.
contrasting
train_6049
In Chapter 2, "Aspects of the Micro-Structure of Word Meanings," D. Alan Cruse addresses the issue of the extreme context-sensitivity of word meaning, which can result in an almost infinite subdivision of senses.
cruse believes that there are "regions of higher semantic density" within this extreme variability, which he calls sense-nodules, "lumps of meaning with greater or lesser stability under contextual change."
contrasting
train_6050
The authors show that local context cues are very precise when they occur, but often simply do not occur.
topical context is very efficient in helping discriminate between homographs, but not very helpful for identifying the different senses of a polysemous word.
contrasting
train_6051
This resource should prove useful for researchers faced with these same decisions during corpus analysis.
the level of detail will sometimes overwhelm computational and corpus linguists attempting to develop automatic taggers and parsers.
contrasting
train_6052
For example, Flagstaff--my home town--is tagged as a common noun (page 87), because the word originally referred to a flagpole.
greenwood is tagged as a proper noun (page 88), because it was originally the surname of a colonist.
contrasting
train_6053
Here, issues of syntactic and semantic style are involved, as one can choose how the semantic content is to be incorporated.
the latter type of decision involves options that might have subtle semantic and stylistic differences but result in the same syntactic structure (though collocational and subcategorization structure can vary).
contrasting
train_6054
Dunning (1993) argues for the use of G^2 rather than X^2, based on the claim that the sampling distribution of G^2 approaches the true chi-square distribution quicker than the sampling distribution of X^2.
agresti (1996, page 34) makes the opposite claim: "The sampling distributions of X^2 and G^2 get closer to chi-squared as the sample size n increases.
contrasting
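For reference, standard definitions of the two statistics compared in train_6054 (a sketch; not drawn from either cited work):

```python
import math

def pearson_x2(observed: list[float], expected: list[float]) -> float:
    # Pearson's X^2 statistic over matched observed/expected counts.
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

def likelihood_ratio_g2(observed: list[float], expected: list[float]) -> float:
    # The log-likelihood-ratio statistic G^2; zero cells contribute 0.
    return 2 * sum(o * math.log(o / e) for o, e in zip(observed, expected) if o > 0)
```

Both are referred to the same chi-square distribution; the dispute in the record is only about which sampling distribution converges to it faster.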
train_6055
On the one hand it is clear that people are sensitive to certain kinds of probabilistic information in building interpretations of ambiguous linguistic input and even in generating predictions of what is to come next (and recovering material that was unclear in the original, noisy input signal).
people clearly demonstrate the ability to apply rich and productive linguistic principles in dealing with complex and novel input.
contrasting
train_6056
If we know that a capitalized word that follows a period is a common word, we can safely assign such period as sentence terminal.
if we know that a period is not sentence terminal, then we can conclude that the following capitalized word is a proper name.
contrasting
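A hedged sketch of the two directions of the inference described in train_6056; the common-word set here is a stand-in, not the authors' list:

```python
# Stand-in for a precompiled list of frequent sentence-starting common words.
COMMON_WORDS = {"the", "this", "in", "by"}

def period_ends_sentence(next_word: str) -> bool:
    # Direction 1: a following capitalized common word marks the period
    # as sentence terminal.
    return next_word[:1].isupper() and next_word.lower() in COMMON_WORDS

def next_word_is_proper_name(period_is_terminal: bool, next_word: str) -> bool:
    # Direction 2: after a known non-terminal period, a capitalized word
    # is taken to be a proper name.
    return (not period_is_terminal) and next_word[:1].isupper()
```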
train_6057
As one might predict, the most frequent sentence-starting common word was The.
this list also included some adverbs, such as Suddenly and Once; some prepositions, such as In, To, and By; and even a few verbs: Let, Have, Do, etc.
contrasting
train_6058
Having established that, we can apply this information to nonevident contexts and classify Kong as a regular word throughout the document.
if we detect a context such as Kong., said, this indicates that in this document, Kong is normally written with a trailing period and hence is an abbreviation.
contrasting
train_6059
When we studied the distribution of capitalized words after capitalized abbreviations, we uncovered an interesting empirical fact.
a capitalized word that follows a capitalized abbreviation is almost certainly a proper name unless it is listed in the list of frequent sentence-starting common words (i.e., it is not The, etc.).
contrasting
train_6060
Both POS taggers and named-entity recognizers are normally built using the local-context paradigm.
we opted for a method that relies on the entire distribution of a word in a document.
contrasting
train_6061
Class construction is then combinatorially very demanding and depends on frequency counts for joint events involving particular words, a potentially unreliable source of information as we noted above.
this is not very satisfactory because one of the goals of our work is precisely to avoid the problems of data sparseness by grouping words into classes.
contrasting
train_6062
"Early MT in France," by Maurice Gross, presents only a very brief sketch of the Paris project, which focused on Russian-French translation and terminated early in 1963 after Sestier became convinced that the task was too difficult to pursue further.
christian Boitet's article, "Bernard Vauquois' contribution to the Theory and Practice of Building MT Systems: A Historical Perspective," provides a relatively detailed picture of both the Grenoble project and the accomplishments of its leader in his various roles as researcher, teacher, MT system builder, and international figure in computational linguistics.
contrasting
train_6063
Researchers will be more interested in chapters 3, 5, and 6, whereas system designers will probably prefer chapters 7, 9, and 11.
chapters can be read in virtually any order (except for chapter 1, which should be read first, and chapter 12, which assumes prior reading of chapter 8).
contrasting
train_6064
This is not surprising, since we can see from Figure 2 that the peripheral vowels in the two vowel inventories are almost the same in the F1-F2 plane.
for the six-vowel system produced in combination with any of the three fitness functions and the seven-vowel systems produced in combination with the first fitness function (F1F), there are some big differences between the two vowel inventories.
contrasting
train_6065
The bilingual sentences used in training are correct transcriptions of spoken dialogues.
they include spontaneous speech effects such as hesitations, false starts, and ungrammatical phrases.
contrasting
train_6066
Systems such as the SRI Core Language Engine (Moran 1988;Moran and Pereira 1992), LUNAR (Woods 1986), and TEAM (Martin, Appelt, and Pereira 1986) have employed scope critics that use heuristics to decide between alternative scopings.
the rules that these systems use in making quantifier scope decisions are motivated only by the researchers' intuitions, and no empirical results have been published regarding their accuracy.
contrasting
train_6067
Since the efficiency of the beam search approach is based on restrictions on the allowed coverage vectors C alone, the approach may be used for different types of translation models as well (e.g., for the multiword-based translation model proposed in Och, Tillmann, and Ney [1999]).
since the decoding problem for the IBM-4 translation model is provably NP-complete, as shown in Knight (1999) and Germann et al.
contrasting
train_6068
Therefore, the grammar coverage is broad, and it is feasible to use these methods in serious applications.
the translation task was evaluated only in the domain of air travel reservations.
contrasting
train_6069
Text-to-speech translation is an inevitable part of a speech-to-speech translation system.
text-to-speech technology is not considered in this book, and prosody translation and prosody transfer are described only briefly.
contrasting
train_6070
The trigger his induces the presupposition that a male individual has an apartment.
it does not presuppose that just any male person has an apartment, nor that some boxer or other creature owns an apartment.
contrasting
train_6071
Two operations on DRSs used in BAT are merge reduction and presuppositional binding, 2 and both require a precise definition of free and bound variables.
sentence-DRSs allow "ambiguous" bindings.
contrasting
train_6072
If we prove that ∀w((w,A)^fo → (w,B)^fo), we know that B is not informative with respect to A.
if we are able to show that both ∃w((w,A)^fo ∧ (w,B)^fo) and ∃w((w,A)^fo ∧ ¬(w,B)^fo) are satisfiable formulas, we can say that B is informative with respect to A.
contrasting
train_6073
And just as prosody undoubtedly contributes to the meaning of utterances, so too does a text's graphical presentation contribute to its meaning.
although there is a long tradition and rich linguistic framework for describing and representing speech prosody (e.g., Halliday 1967;Chomsky and Halle 1968;Crystal 1969;Bolinger 1972;Pierrehumbert 1980;'t Hart, Collier, and Cohen 1990;Ladd 1996), the same is not true for text layout.
contrasting
train_6074
Mann and Thompson assert this explicitly (1987, page 4): Relations are defined to hold between two non-overlapping text spans, here called the nucleus and the satellite, denoted by N and S, where a text span is an uninterrupted linear sequence of text.
it is unclear whether their claim is intentional or simply the result of an informal style of presentation.
contrasting
train_6075
We have already seen this in the case of complex conditionals in example (12).
the phenomenon is much more widespread: In rare cases the treatment can be prolonged for another week; however, this is risky since • The side-effects are likely to get worse.
contrasting
train_6076
On first tackling indented structures, we followed precisely this reasoning, introducing vertical list as a text category.
despite its initial plausibility, this decision has several irritating consequences.
contrasting
train_6077
Therefore, it is banned by the FDA.
the FDA approves ElixirPlus.
contrasting
train_6078
Coordinating Conjunction: We will assume that a coordinating conjunction like but may link spans within a text-clause, or across text-clauses and textsentences, so that no constraint on the levels of A and B results.
the satellite must precede the nucleus, so we have position(B) = P1 (and hence, by Sister Position, position(A) = P2).
contrasting
train_6079
To analyze such an example informally may be a useful source of insights, but to attempt a complete formal analysis (and generation) of the page seems bold in the extreme.
despite this difference in approach, the framework that emerges from Bateman et al.
contrasting
train_6080
Version 15 (1) Elixir contains gestodene.
elixir is safe to use since • the medicine has been thoroughly tested • it has no significant side effects.
contrasting
train_6081
Version 17 (1) Elixir contains gestodene.
the medicine has been thoroughly tested and it has no significant side effects so Elixir is safe to use.
contrasting
train_6082
Version 33 (2) Elixir contains gestodene.
since • the medicine has been thoroughly tested • it has no significant side effects Elixir is safe to use.
contrasting
train_6083
Version 47 (3) Elixir contains gestodene.
the medicine has been thoroughly tested.
contrasting
train_6084
Consequently, Elixir is safe to use.
the medicine has been thoroughly tested.
contrasting
train_6085
Although conceptually elegant, Pustejovsky's (1995) theory of the generative lexicon does not aim to provide an exhaustive description of the telic roles that a given noun may have.
these roles are crucial for interpreting verb-noun and adjective-noun metonymies.
contrasting
train_6086
However, these roles are crucial for interpreting verb-noun and adjective-noun metonymies.
to Vendler (1968), who acknowledges that logical metonymies may trigger more than one interpretation (in other words, that there may be more than one possible event associated with the noun in question), Pustejovsky implicitly assumes that nouns or noun classes have one (perhaps default) telic role without, however, systematically investigating the relative degree of ambiguity of the various cases of logical metonymy (e.g., the out-of-context possible readings for fast scientist suggest that fast scientist exhibits a higher degree of semantic ambiguity than fast plane).
contrasting
train_6087
We evaluate our results by comparing the model's predictions to human judgments and show that the model's ranking of meanings correlates reliably with human intuitions.
the model is limited in its scope.
contrasting
train_6088
Furthermore, applying frequency cutoffs reduces the range of the obtained probabilities: only likely (but not necessarily plausible) interpretations are obtained with f (o, e) ≥ 4 and f (v, e) ≥ 4.
one of the aims of the experiment outlined below was to explore the quality of interpretations with varied probabilities.
contrasting
train_6089
The result indicates that our model is particularly good at deriving meanings corresponding to the argument bias for a given adjective.
the dispreferred interpretations also correlate significantly with human judgments, which suggests that the model derives plausible interpretations even in cases in which the argument bias is overridden.
contrasting
train_6090
Currently our models cannot estimate probabilities for word combinations unseen in the corpus, and WordNet could be used for re-creating the frequencies of these combinations (Lapata, Keller, and McDonald 2001;Clark and Weir 2001).
part of our aim here was to investigate whether it is at all possible to generate interpretations for metonymic constructions without the use of prior knowledge bases that might bias the acquisition process in uncontrolled and idiosyncratic ways.
contrasting
train_6091
For example, a fast plane is not only a plane (i.e., an aircraft) that flies, lands, or travels quickly, but also a plane (i.e., a surface) that transposes or rotates quickly and a plane (i.e., a tool) that smoothes something quickly.
more paraphrases are derived for the "aircraft" sense of plane; these paraphrases also receive a higher ranking.
contrasting
train_6092
This is not surprising, since verbs related to the "aircraft" sense of plane are more frequent than verbs related to the other two senses.
to fast plane, however, efficient plane should probably bias toward the "tool" sense of plane, even though the "aircraft" sense is more frequent in the corpus.
contrasting
train_6093
In such a critical and knowledge-rich environment, terms cannot be automatically added to the terminology as a consequence of language processing.
cimino describes how simple language processing, together with knowledge-based reasoning, can be used to guide a terminologist in the process of, for example, adding the name of a new drug.
contrasting
train_6094
One hundred million words is a large enough corpus for many empirical strategies for learning about language, either for linguists and lexicographers (Baker, Fillmore, and Lowe 1998; Kilgarriff and Rundell 2002) or for technologies that need quantitative information about the behavior of words as input (most notably parsers [Briscoe and Carroll 1997;Korhonen 2000]).
for some purposes, it is not large enough.
contrasting
train_6095
The largest appear to have been obtained by Chen and Nie (2000), who acquired collections on the order of 15,000 document pairs for English-French, English-German, English-Dutch, English-Italian, and English-Chinese using the PT-Miner system.
these collections have not been made available to the general community.
contrasting
train_6096
Until recently the difficulty with this solution has been that the collection of URLs deteriorates over time as sites disappear, pages are reorganized, and underlying content changes: For example, in April 2002, we attempted to download the documents in the STRAND English-French, English-Chinese, and English-Basque collections, and we were able to access successfully only around 67%, 43%, and 40% of the URL pairs, respectively.
the Internet Archive's Wayback Machine provides a way to distribute persistent URLs.
contrasting
train_6097
MT systems try to translate text into a well-readable form governed by morphological, syntactic, and semantic constraints.
current IR models are based on bag-of-words models.
contrasting
train_6098
For example, a user might formulate the query "Internet connection" in order to retrieve documents about computer networks, Internet service providers, or proxies.
under the current bag-of-words approach, the relevant documents containing these terms are unlikely to be retrieved.
contrasting
train_6099
Several other research groups (for example, the RALI lab at Université de Montréal) have also tried to acquire manually constructed parallel corpora.
manual collection of large corpora is a tedious task that is time- and resource-consuming.
contrasting