corpus_id (stringlengths 7-12) | paper_id (stringlengths 9-16) | title (stringlengths 1-261) | abstract (stringlengths 70-4.02k) | source (stringclasses 1: arxiv) | bibtex (stringlengths 208-20.9k) | citation_key (stringlengths 6-100) |
---|---|---|---|---|---|---|
arxiv-668901 | cmp-lg/9608014 | Classifiers in Japanese-to-English Machine Translation | <|reference_start|>Classifiers in Japanese-to-English Machine Translation: This paper proposes an analysis of classifiers into four major types: UNIT, METRIC, GROUP and SPECIES, based on properties of both Japanese and English. The analysis makes possible a uniform and straightforward treatment of noun phrases headed by classifiers in Japanese-to-English machine translation, and has been implemented in the MT system ALT-J/E. Although the analysis is based on the characteristics of, and differences between, Japanese and English, it is shown to be also applicable to the unrelated language Thai.<|reference_end|> | arxiv | @article{bond1996classifiers,
title={Classifiers in Japanese-to-English Machine Translation},
author={Francis Bond (NTT), Kentaro Ogura (NTT), Satoru Ikehara (Tottori
University)},
journal={Proceedings of the 16th International Conference on Computational
Linguistics (COLING'96), pp 125--130.},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608014},
primaryClass={cmp-lg cs.CL}
} | bond1996classifiers |
arxiv-668902 | cmp-lg/9608015 | Morphological Productivity in the Lexicon | <|reference_start|>Morphological Productivity in the Lexicon: In this paper we outline a lexical organization for Turkish that makes use of lexical rules for inflections, derivations, and lexical category changes to control the proliferation of lexical entries. Lexical rules handle changes in grammatical roles, enforce type constraints, and control the mapping of subcategorization frames in valency-changing operations. A lexical inheritance hierarchy facilitates the enforcement of type constraints. Semantic compositions in inflections and derivations are constrained by the properties of the terms and predicates. The design has been tested as part of a HPSG grammar for Turkish. In terms of performance, run-time execution of the rules seems to be a far better alternative than pre-compilation. The latter causes exponential growth in the lexicon due to intensive use of inflections and derivations in Turkish.<|reference_end|> | arxiv | @article{sehitoglu1996morphological,
title={Morphological Productivity in the Lexicon},
author={Onur Sehitoglu and Cem Bozsahin (Middle East Technical University)},
journal={Proc. of the ACL'96 SIGLEX Workshop, Santa Cruz, 105--114},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608015},
primaryClass={cmp-lg cs.CL}
} | sehitoglu1996morphological |
arxiv-668903 | cmp-lg/9608016 | A Sign-Based Phrase Structure Grammar for Turkish | <|reference_start|>A Sign-Based Phrase Structure Grammar for Turkish: This study analyses Turkish syntax from an informational point of view. Sign based linguistic representation and principles of HPSG (Head-driven Phrase Structure Grammar) theory are adapted to Turkish. The basic informational elements are nested and inherently sorted feature structures called signs. In the implementation, logic programming tool ALE (Attribute Logic Engine) which is primarily designed for implementing HPSG grammars is used. A type and structure hierarchy of Turkish language is designed. Syntactic phenomena such as subcategorization, relative clauses, constituent order variation, adjuncts, nominal predicates and complement-modifier relations in Turkish are analyzed. A parser is designed and implemented in ALE.<|reference_end|> | arxiv | @article{sehitoglu1996a,
title={A Sign-Based Phrase Structure Grammar for Turkish},
author={Onur Tolga Sehitoglu},
journal={arXiv preprint arXiv:cmp-lg/9608016},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608016},
primaryClass={cmp-lg cs.CL}
} | sehitoglu1996a |
arxiv-668904 | cmp-lg/9608017 | Automatic Alignment of English-Chinese Bilingual Texts of CNS News | <|reference_start|>Automatic Alignment of English-Chinese Bilingual Texts of CNS News: In this paper we address a method to align English-Chinese bilingual news reports from China News Service, combining both lexical and statistical approaches. Because of the sentential structure differences between English and Chinese, matching at the sentence level as in many other works may result in frequent matching of several sentences en masse. In view of this, the current work also attempts to create shorter alignment pairs by permitting finer matching between clauses from both texts if possible. The current method is based on statistical correlation between sentence or clause length of both texts and at the same time uses obvious anchors such as numbers and place names appearing frequently in the news reports as lexical cues.<|reference_end|> | arxiv | @article{xu1996automatic,
title={Automatic Alignment of English-Chinese Bilingual Texts of CNS News},
author={Donghua Xu, Chew Lim Tan (National University of Singapore)},
journal={arXiv preprint arXiv:cmp-lg/9608017},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608017},
primaryClass={cmp-lg cs.CL}
} | xu1996automatic |
arxiv-668905 | cmp-lg/9608018 | Algorithms for Speech Recognition and Language Processing | <|reference_start|>Algorithms for Speech Recognition and Language Processing: Speech processing requires very efficient methods and algorithms. Finite-state transducers have been shown recently both to constitute a very useful abstract model and to lead to highly efficient time and space algorithms in this field. We present these methods and algorithms and illustrate them in the case of speech recognition. In addition to classical techniques, we describe many new algorithms such as minimization, global and local on-the-fly determinization of weighted automata, and efficient composition of transducers. These methods are currently used in large vocabulary speech recognition systems. We then show how the same formalism and algorithms can be used in text-to-speech applications and related areas of language processing such as morphology, syntax, and local grammars, in a very efficient way. The tutorial is self-contained and requires no specific computational or linguistic knowledge other than classical results.<|reference_end|> | arxiv | @article{mohri1996algorithms,
title={Algorithms for Speech Recognition and Language Processing},
author={Mehryar Mohri (AT&T Laboratories), Michael Riley (AT&T Laboratories),
and Richard Sproat (Bell Laboratories)},
journal={arXiv preprint arXiv:cmp-lg/9608018},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608018},
primaryClass={cmp-lg cs.CL}
} | mohri1996algorithms |
arxiv-668906 | cmp-lg/9608019 | Using sentence connectors for evaluating MT output | <|reference_start|>Using sentence connectors for evaluating MT output: This paper elaborates on the design of a machine translation evaluation method that aims to determine to what degree the meaning of an original text is preserved in translation, without looking into the grammatical correctness of its constituent sentences. The basic idea is to have a human evaluator take the sentences of the translated text and, for each of these sentences, determine the semantic relationship that exists between it and the sentence immediately preceding it. In order to minimise evaluator dependence, relations between sentences are expressed in terms of the conjuncts that can connect them, rather than through explicit categories. For an n-sentence text this results in a list of n-1 sentence-to-sentence relationships, which we call the text's connectivity profile. This can then be compared to the connectivity profile of the original text, and the degree of correspondence between the two would be a measure for the quality of the translation. A set of "essential" conjuncts was extracted for English and Japanese, and a computer interface was designed to support the task of inserting the most fitting conjuncts between sentence pairs. With these in place, several sets of experiments were performed.<|reference_end|> | arxiv | @article{visser1996using,
title={Using sentence connectors for evaluating MT output},
author={Eric M. Visser and Masaru Fuji (Fujitsu Laboratories Ltd.)},
journal={Proceedings of COLING-96 (Poster Sessions, pgs. 1066-1069)},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608019},
primaryClass={cmp-lg cs.CL}
} | visser1996using |
arxiv-668907 | cmp-lg/9608020 | Phonetic Ambiguity : Approaches, Touchstones, Pitfalls and New Approaches | <|reference_start|>Phonetic Ambiguity : Approaches, Touchstones, Pitfalls and New Approaches: Phonetic ambiguity and confusibility are bugbears for any form of bottom-up or data-driven approach to language processing. The question of when an input is ``close enough'' to a target word pervades the entire problem spaces of speech recognition, synthesis, language acquisition, speech compression, and language representation, but the variety of representations that have been applied are demonstrably inadequate to at least some aspects of the problem. This paper reviews this inadequacy by examining several touchstone models in phonetic ambiguity and relating them to the problems they were designed to solve. A good solution would be, among other things, efficient, accurate, precise, and universally applicable to representation of words, ideally usable as a ``phonetic distance'' metric for direct measurement of the ``distance'' between word or utterance pairs. None of the proposed models can provide a complete solution to the problem; in general, there is no algorithmic theory of phonetic distance. It is unclear whether this is a weakness of our representational technology or a more fundamental difficulty with the problem statement.<|reference_end|> | arxiv | @article{juola1996phonetic,
title={Phonetic Ambiguity : Approaches, Touchstones, Pitfalls and New
Approaches},
author={Patrick Juola (Oxford University, Dept. of Experimental Psychology)},
journal={CSNLP-96, Sept. 2-4, Dublin, Ireland},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608020},
primaryClass={cmp-lg cs.CL}
} | juola1996phonetic |
arxiv-668908 | cmp-lg/9608021 | Isolated-Word Confusion Metrics and the PGPfone Alphabet | <|reference_start|>Isolated-Word Confusion Metrics and the PGPfone Alphabet: Although the confusion of individual phonemes and features have been studied and analyzed since (Miller and Nicely, 1955), there has been little work done on extending this to a predictive theory of word-level confusions. The PGPfone alphabet is a good touchstone problem for developing such word-level confusion metrics. This paper presents some difficulties incurred, along with their proposed solutions, in the extension of phonetic confusion results to a theoretical whole-word phonetic distance metric. The proposed solutions have been used, in conjunction with a set of selection filters, in a genetic algorithm to automatically generate appropriate word lists for a radio alphabet. This work illustrates some principles and pitfalls that should be addressed in any numeric theory of isolated word perception.<|reference_end|> | arxiv | @article{juola1996isolated-word,
title={Isolated-Word Confusion Metrics and the PGPfone Alphabet},
author={Patrick Juola (Oxford University, Dept. of Experimental Psychology)},
journal={NeMLaP-96, Sept. 16-18, 1996, Ankara TURKEY},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9608021},
primaryClass={cmp-lg cs.CL}
} | juola1996isolated-word |
arxiv-668909 | cmp-lg/9609001 | Corrections and Higher-Order Unification | <|reference_start|>Corrections and Higher-Order Unification: We propose an analysis of corrections which models some of the requirements corrections place on context. We then show that this analysis naturally extends to the interaction of corrections with pronominal anaphora on the one hand, and (in)definiteness on the other. The analysis builds on previous unification--based approaches to NL semantics and relies on Higher--Order Unification with Equivalences, a form of unification which takes into account not only syntactic beta-eta-identity but also denotational equivalence.<|reference_end|> | arxiv | @article{gardent1996corrections,
title={Corrections and Higher-Order Unification},
author={Claire Gardent, Michael Kohlhase and Noor van Neusen},
journal={arXiv preprint arXiv:cmp-lg/9609001},
year={1996},
number={CLAUS Report Nr. 77},
archivePrefix={arXiv},
eprint={cmp-lg/9609001},
primaryClass={cmp-lg cs.CL}
} | gardent1996corrections |
arxiv-668910 | cmp-lg/9609002 | Inferring Acceptance and Rejection in Dialogue by Default Rules of Inference | <|reference_start|>Inferring Acceptance and Rejection in Dialogue by Default Rules of Inference: This paper discusses the processes by which conversants in a dialogue can infer whether their assertions and proposals have been accepted or rejected by their conversational partners. It expands on previous work by showing that logical consistency is a necessary indicator of acceptance, but that it is not sufficient, and that logical inconsistency is sufficient as an indicator of rejection, but it is not necessary. I show how conversants can use information structure and prosody as well as logical reasoning in distinguishing between acceptances and logically consistent rejections, and relate this work to previous work on implicature and default reasoning by introducing three new classes of rejection: {\sc implicature rejections}, {\sc epistemic rejections} and {\sc deliberation rejections}. I show how these rejections are inferred as a result of default inferences, which, by other analyses, would have been blocked by the context. In order to account for these facts, I propose a model of the common ground that allows these default inferences to go through, and show how the model, originally proposed to account for the various forms of acceptance, can also model all types of rejection.<|reference_end|> | arxiv | @article{walker1996inferring,
title={Inferring Acceptance and Rejection in Dialogue by Default Rules of
Inference},
author={Marilyn A. Walker},
journal={Language and Speech, 39-2, 1996},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9609002},
primaryClass={cmp-lg cs.CL}
} | walker1996inferring |
arxiv-668911 | cmp-lg/9609003 | Cue Phrase Classification Using Machine Learning | <|reference_start|>Cue Phrase Classification Using Machine Learning: Cue phrases may be used in a discourse sense to explicitly signal discourse structure, but also in a sentential sense to convey semantic rather than structural information. Correctly classifying cue phrases as discourse or sentential is critical in natural language processing systems that exploit discourse structure, e.g., for performing tasks such as anaphora resolution and plan recognition. This paper explores the use of machine learning for classifying cue phrases as discourse or sentential. Two machine learning programs (Cgrendel and C4.5) are used to induce classification models from sets of pre-classified cue phrases and their features in text and speech. Machine learning is shown to be an effective technique for not only automating the generation of classification models, but also for improving upon previous results. When compared to manually derived classification models already in the literature, the learned models often perform with higher accuracy and contain new linguistic insights into the data. In addition, the ability to automatically construct classification models makes it easier to comparatively analyze the utility of alternative feature representations of the data. Finally, the ease of retraining makes the learning approach more scalable and flexible than manual methods.<|reference_end|> | arxiv | @article{litman1996cue,
title={Cue Phrase Classification Using Machine Learning},
author={Diane J. Litman (AT&T Labs - Research)},
journal={Journal of Artificial Intelligence Research 5 (1996) 53-94},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9609003},
primaryClass={cmp-lg cs.CL}
} | litman1996cue |
arxiv-668912 | cmp-lg/9609004 | A Principled Framework for Constructing Natural Language Interfaces To Temporal Databases | <|reference_start|>A Principled Framework for Constructing Natural Language Interfaces To Temporal Databases: Most existing natural language interfaces to databases (NLIDBs) were designed to be used with ``snapshot'' database systems, that provide very limited facilities for manipulating time-dependent data. Consequently, most NLIDBs also provide very limited support for the notion of time. The database community is becoming increasingly interested in _temporal_ database systems. These are intended to store and manipulate in a principled manner information not only about the present, but also about the past and future. This thesis develops a principled framework for constructing English NLIDBs for _temporal_ databases (NLITDBs), drawing on research in tense and aspect theories, temporal logics, and temporal databases. I first explore temporal linguistic phenomena that are likely to appear in English questions to NLITDBs. Drawing on existing linguistic theories of time, I formulate an account for a large number of these phenomena that is simple enough to be embodied in practical NLITDBs. Exploiting ideas from temporal logics, I then define a temporal meaning representation language, TOP, and I show how the HPSG grammar theory can be modified to incorporate the tense and aspect account of this thesis, and to map a wide range of English questions involving time to appropriate TOP expressions. Finally, I present and prove the correctness of a method to translate from TOP to TSQL2, TSQL2 being a temporal extension of the SQL-92 database language. This way, I establish a sound route from English questions involving time to a general-purpose temporal database language, that can act as a principled framework for building NLITDBs. To demonstrate that this framework is workable, I employ it to develop a prototype NLITDB, implemented using ALE and Prolog.<|reference_end|> | arxiv | @article{androutsopoulos1996a,
title={A Principled Framework for Constructing Natural Language Interfaces To
Temporal Databases},
author={Ion Androutsopoulos (University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9609004},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9609004},
primaryClass={cmp-lg cs.CL}
} | androutsopoulos1996a |
arxiv-668913 | cmp-lg/9609005 | Centering in Japanese Discourse | <|reference_start|>Centering in Japanese Discourse: In this paper we propose a computational treatment of the resolution of zero pronouns in Japanese discourse, using an adaptation of the centering algorithm. We are able to factor language-specific dependencies into one parameter of the centering algorithm. Previous analyses have stipulated that a zero pronoun and its cospecifier must share a grammatical function property such as {\sc Subject} or {\sc NonSubject}. We show that this property-sharing stipulation is unneeded. In addition we propose the notion of {\sc topic ambiguity} within the centering framework, which predicts some ambiguities that occur in Japanese discourse. This analysis has implications for the design of language-independent discourse modules for Natural Language systems. The centering algorithm has been implemented in an HPSG Natural Language system with both English and Japanese grammars.<|reference_end|> | arxiv | @article{walker1996centering,
title={Centering in Japanese Discourse},
author={Marilyn Walker, Masayo Iida and Sharon Cote},
journal={COLING90: Proceedings 13th International Conference on
Computational Linguistics, Helsinki},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9609005},
primaryClass={cmp-lg cs.CL}
} | walker1996centering |
arxiv-668914 | cmp-lg/9609006 | Japanese Discourse and the Process of Centering | <|reference_start|>Japanese Discourse and the Process of Centering: This paper has three aims: (1) to generalize a computational account of the discourse process called {\sc centering}, (2) to apply this account to discourse processing in Japanese so that it can be used in computational systems for machine translation or language understanding, and (3) to provide some insights on the effect of syntactic factors in Japanese on discourse interpretation. We argue that while discourse interpretation is an inferential process, syntactic cues constrain this process, and demonstrate this argument with respect to the interpretation of {\sc zeros}, unexpressed arguments of the verb, in Japanese. The syntactic cues in Japanese discourse that we investigate are the morphological markers for grammatical {\sc topic}, the postposition {\it wa}, as well as those for grammatical functions such as {\sc subject}, {\em ga}, {\sc object}, {\em o} and {\sc object2}, {\em ni}. In addition, we investigate the role of speaker's {\sc empathy}, which is the viewpoint from which an event is described. This is syntactically indicated through the use of verbal compounding, i.e. the auxiliary use of verbs such as {\it kureta, kita}. Our results are based on a survey of native speakers of their interpretation of short discourses, consisting of minimal pairs, varied by one of the above factors. We demonstrate that these syntactic cues do indeed affect the interpretation of {\sc zeros}, but that having previously been the {\sc topic} and being realized as a {\sc zero} also contributes to the salience of a discourse entity. We propose a discourse rule of {\sc zero topic assignment}, and show that {\sc centering} provides constraints on when a {\sc zero} can be interpreted as the {\sc zero topic}.<|reference_end|> | arxiv | @article{walker1996japanese,
title={Japanese Discourse and the Process of Centering},
author={Marilyn Walker, Masayo Iida and Sharon Cote},
journal={Computational Linguistics 20-2, 1994},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9609006},
primaryClass={cmp-lg cs.CL}
} | walker1996japanese |
arxiv-668915 | cmp-lg/9609007 | Discourse Coherence and Shifting Centers in Japanese Texts | <|reference_start|>Discourse Coherence and Shifting Centers in Japanese Texts: In languages such as Japanese, the use of {\it zeros}, unexpressed arguments of the verb, in utterances that shift the topic involves a risk that the meaning intended by the speaker may not be transparent to the hearer. However, this potentially undesirable conversational strategy often occurs in the course of naturally-occurring discourse. In this chapter, I report on an empirical study of 250 utterances with {\it zeros} in 20 Japanese newspaper articles. Each utterance is analyzed in terms of centering transitions and the form in which centers are realized by referring expressions. I also examine lexical subcategorization information, and tense and aspect in order to test the hypothesis that the speaker expects the hearer to use this information in determining global discourse structure. I explain the occurrence of {\it zeros} in {\sc retain} and {\sc rough-shift} centering transitions, by claiming that a {\it zero} can only be used in these cases when the shift of centers is supported by contextual information such as lexical semantics, tense and aspect, and agreement features. I then propose an algorithm by which centering can incorporate these observations to integrate centering with global discourse structure, and thus enhance its ability for non-local pronoun resolution.<|reference_end|> | arxiv | @article{iida1996discourse,
title={Discourse Coherence and Shifting Centers in Japanese Texts},
author={Masayo Iida (Fujitsu Software Corporation)},
journal={Centering in Discourse, Oxford University Press; Eds. Walker,
Joshi and Prince, In Press},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9609007},
primaryClass={cmp-lg cs.CL}
} | iida1996discourse |
arxiv-668916 | cmp-lg/9609008 | Designing Statistical Language Learners: Experiments on Noun Compounds | <|reference_start|>Designing Statistical Language Learners: Experiments on Noun Compounds: The goal of this thesis is to advance the exploration of the statistical language learning design space. In pursuit of that goal, the thesis makes two main theoretical contributions: (i) it identifies a new class of designs by specifying an architecture for natural language analysis in which probabilities are given to semantic forms rather than to more superficial linguistic elements; and (ii) it explores the development of a mathematical theory to predict the expected accuracy of statistical language learning systems in terms of the volume of data used to train them. The theoretical work is illustrated by applying statistical language learning designs to the analysis of noun compounds. Both syntactic and semantic analysis of noun compounds are attempted using the proposed architecture. Empirical comparisons demonstrate that the proposed syntactic model is significantly better than those previously suggested, approaching the performance of human judges on the same task, and that the proposed semantic model, the first statistical approach to this problem, exhibits significantly better accuracy than the baseline strategy. These results suggest that the new class of designs identified is a promising one. The experiments also serve to highlight the need for a widely applicable theory of data requirements.<|reference_end|> | arxiv | @article{lauer1996designing,
title={Designing Statistical Language Learners: Experiments on Noun Compounds},
author={Mark Lauer (Microsoft Research Institute, Sydney)},
journal={arXiv preprint arXiv:cmp-lg/9609008},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9609008},
primaryClass={cmp-lg cs.CL}
} | lauer1996designing |
arxiv-668917 | cmp-lg/9609009 | A Geometric Approach to Mapping Bitext Correspondence | <|reference_start|>A Geometric Approach to Mapping Bitext Correspondence: The first step in most corpus-based multilingual NLP work is to construct a detailed map of the correspondence between a text and its translation. Several automatic methods for this task have been proposed in recent years. Yet even the best of these methods can err by several typeset pages. The Smooth Injective Map Recognizer (SIMR) is a new bitext mapping algorithm. SIMR's errors are smaller than those of the previous front-runner by more than a factor of 4. Its robustness has enabled new commercial-quality applications. The greedy nature of the algorithm makes it independent of memory resources. Unlike other bitext mapping algorithms, SIMR allows crossing correspondences to account for word order differences. Its output can be converted quickly and easily into a sentence alignment. SIMR's output has been used to align over 200 megabytes of the Canadian Hansards for publication by the Linguistic Data Consortium.<|reference_end|> | arxiv | @article{melamed1996a,
title={A Geometric Approach to Mapping Bitext Correspondence},
author={I. Dan Melamed (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9609009},
year={1996},
number={IRCS 96-22},
archivePrefix={arXiv},
eprint={cmp-lg/9609009},
primaryClass={cmp-lg cs.CL}
} | melamed1996a |
arxiv-668918 | cmp-lg/9609010 | Automatic Detection of Omissions in Translations | <|reference_start|>Automatic Detection of Omissions in Translations: ADOMIT is an algorithm for Automatic Detection of OMIssions in Translations. The algorithm relies solely on geometric analysis of bitext maps and uses no linguistic information. This property allows it to deal equally well with omissions that do not correspond to linguistic units, such as might result from word-processing mishaps. ADOMIT has proven itself by discovering many errors in a hand-constructed gold standard for evaluating bitext mapping algorithms. Quantitative evaluation on simulated omissions showed that, even with today's poor bitext mapping technology, ADOMIT is a valuable quality control tool for translators and translation bureaus.<|reference_end|> | arxiv | @article{melamed1996automatic,
title={Automatic Detection of Omissions in Translations},
author={I. Dan Melamed (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9609010},
year={1996},
number={IRCS 96-23},
archivePrefix={arXiv},
eprint={cmp-lg/9609010},
primaryClass={cmp-lg cs.CL}
} | melamed1996automatic |
arxiv-668919 | cmp-lg/9610001 | Death and Lightness: Using a Demographic Model to Find Support Verbs | <|reference_start|>Death and Lightness: Using a Demographic Model to Find Support Verbs: Some verbs have a particular kind of binary ambiguity: they can carry their normal, full meaning, or they can be merely acting as a prop for the nominal object. It has been suggested that there is a detectable pattern in the relationship between a verb acting as a prop (a \term{support verb}) and the noun it supports. The task this paper undertakes is to develop a model which identifies the support verb for a particular noun, and by extension, when nouns are enumerated, a model which disambiguates a verb with respect to its support status. The paper sets up a basic model as a standard for comparison; it then proposes a more complex model, and gives some results to support the model's validity, comparing it with other similar approaches.<|reference_end|> | arxiv | @article{dras1996death,
title={Death and Lightness: Using a Demographic Model to Find Support Verbs},
author={Mark Dras and Mike Johnson (Microsoft Institute and Dept of Computing,
Macquarie University)},
journal={CSNLP-96, Sept. 2-4, Dublin, Ireland},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9610001},
primaryClass={cmp-lg cs.CL}
} | dras1996death |
arxiv-668920 | cmp-lg/9610002 | Gathering Statistics to Aspectually Classify Sentences with a Genetic Algorithm | <|reference_start|>Gathering Statistics to Aspectually Classify Sentences with a Genetic Algorithm: This paper presents a method for large corpus analysis to semantically classify an entire clause. In particular, we use cooccurrence statistics among similar clauses to determine the aspectual class of an input clause. The process examines linguistic features of clauses that are relevant to aspectual classification. A genetic algorithm determines what combinations of linguistic features to use for this task.<|reference_end|> | arxiv | @article{siegel1996gathering,
title={Gathering Statistics to Aspectually Classify Sentences with a Genetic
Algorithm},
author={Eric V. Siegel and Kathleen R. McKeown (Columbia University)},
journal={arXiv preprint arXiv:cmp-lg/9610002},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9610002},
primaryClass={cmp-lg cs.CL}
} | siegel1996gathering |
arxiv-668921 | cmp-lg/9610003 | Stochastic Attribute-Value Grammars | <|reference_start|>Stochastic Attribute-Value Grammars: Probabilistic analogues of regular and context-free grammars are well-known in computational linguistics, and currently the subject of intensive research. To date, however, no satisfactory probabilistic analogue of attribute-value grammars has been proposed: previous attempts have failed to define a correct parameter-estimation algorithm. In the present paper, I define stochastic attribute-value grammars and give a correct algorithm for estimating their parameters. The estimation algorithm is adapted from Della Pietra, Della Pietra, and Lafferty (1995). To estimate model parameters, it is necessary to compute the expectations of certain functions under random fields. In the application discussed by Della Pietra, Della Pietra, and Lafferty (representing English orthographic constraints), Gibbs sampling can be used to estimate the needed expectations. The fact that attribute-value grammars generate constrained languages makes Gibbs sampling inapplicable, but I show how a variant of Gibbs sampling, the Metropolis-Hastings algorithm, can be used instead.<|reference_end|> | arxiv | @article{abney1996stochastic,
title={Stochastic Attribute-Value Grammars},
author={Steven Abney (University of Tuebingen)},
journal={arXiv preprint arXiv:cmp-lg/9610003},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9610003},
primaryClass={cmp-lg cs.CL}
} | abney1996stochastic |
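
The entry above (cmp-lg/9610003) notes that Gibbs sampling is inapplicable for estimating feature expectations over the constrained languages generated by attribute-value grammars, and that the Metropolis-Hastings algorithm can be used instead. For orientation only, here is a minimal, generic Metropolis-Hastings sampler for an unnormalized target; it is not Abney's estimator, and the target and proposal interfaces shown are placeholder assumptions. Feature expectations would then be approximated by averaging feature values over the retained samples.

```python
import math
import random

def metropolis_hastings(log_target, propose, log_q, x0, n_samples):
    """Generic Metropolis-Hastings sampler over an unnormalized target.

    log_target(x)       -- log of the unnormalized target density
    propose(x)          -- draw a candidate x' given the current state x
    log_q(x_new, x_old) -- log proposal density q(x_new | x_old)
    """
    x = x0
    samples = []
    for _ in range(n_samples):
        x_new = propose(x)
        # log acceptance ratio: target(x') q(x | x') / (target(x) q(x' | x))
        log_alpha = (log_target(x_new) + log_q(x, x_new)
                     - log_target(x) - log_q(x_new, x))
        if random.random() < math.exp(min(0.0, log_alpha)):
            x = x_new
        samples.append(x)
    return samples
```
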
arxiv-668922 | cmp-lg/9610004 | A Faster Structured-Tag Word-Classification Method | <|reference_start|>A Faster Structured-Tag Word-Classification Method: Several methods have been proposed for processing a corpus to induce a tagset for the sub-language represented by the corpus. This paper examines a structured-tag word classification method introduced by McMahon (1994) and discussed further by McMahon & Smith (1995) in cmp-lg/9503011 . Two major variations, (1) non-random initial assignment of words to classes and (2) moving multiple words in parallel, together provide robust non-random results with a speed increase of 200% to 450%, at the cost of slightly lower quality than McMahon's method's average quality. Two further variations, (3) retaining information from less- frequent words and (4) avoiding reclustering closed classes, are proposed for further study. Note: The speed increases quoted above are relative to my implementation of my understanding of McMahon's algorithm; this takes time measured in hours and days on a home PC. A revised version of the McMahon & Smith (1995) paper has appeared (June 1996) in Computational Linguistics 22(2):217- 247; this refers to a time of "several weeks" to cluster 569 words on a Sparc-IPC.<|reference_end|> | arxiv | @article{zhang1996a,
title={A Faster Structured-Tag Word-Classification Method},
author={Min Zhang (University of Melbourne)},
journal={27-Aug-96 PRICAI-96 Workshop on Future Issues for Multi-lingual
Text Processing, Cairns, Australia. ISBN 0 86857 730 8},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9610004},
primaryClass={cmp-lg cs.CL}
} | zhang1996a |
arxiv-668923 | cmp-lg/9610005 | Learning string edit distance | <|reference_start|>Learning string edit distance: In many applications, it is necessary to determine the similarity of two strings. A widely-used notion of string similarity is the edit distance: the minimum number of insertions, deletions, and substitutions required to transform one string into the other. In this report, we provide a stochastic model for string edit distance. Our stochastic model allows us to learn a string edit distance function from a corpus of examples. We illustrate the utility of our approach by applying it to the difficult problem of learning the pronunciation of words in conversational speech. In this application, we learn a string edit distance with one fourth the error rate of the untrained Levenshtein distance. Our approach is applicable to any string classification problem that may be solved using a similarity function against a database of labeled prototypes. Keywords: string edit distance, Levenshtein distance, stochastic transduction, syntactic pattern recognition, prototype dictionary, spelling correction, string correction, string similarity, string classification, speech recognition, pronunciation modeling, Switchboard corpus.<|reference_end|> | arxiv | @article{ristad1996learning,
title={Learning string edit distance},
author={Eric Sven Ristad and Peter N. Yianilos},
journal={arXiv preprint arXiv:cmp-lg/9610005},
year={1996},
number={CS-TR-532-96},
archivePrefix={arXiv},
eprint={cmp-lg/9610005},
primaryClass={cmp-lg cs.CL}
} | ristad1996learning |
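
The entry above (cmp-lg/9610005) builds on the classical edit distance: the minimum number of insertions, deletions, and substitutions needed to transform one string into another. As a point of reference, here is a minimal sketch of that untrained Levenshtein distance; the paper's actual contribution, a stochastic model that learns the edit costs from a corpus, is not reproduced here.

```python
def levenshtein(a: str, b: str) -> int:
    """Classical (untrained) edit distance: unit-cost insertions,
    deletions, and substitutions, computed by dynamic programming."""
    m, n = len(a), len(b)
    # dist[i][j] = edit distance between a[:i] and b[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i          # i deletions
    for j in range(n + 1):
        dist[0][j] = j          # j insertions
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,        # deletion
                             dist[i][j - 1] + 1,        # insertion
                             dist[i - 1][j - 1] + sub)  # substitution / match
    return dist[m][n]

# e.g. levenshtein("kitten", "sitting") == 3
```
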
arxiv-668924 | cmp-lg/9610006 | A Morphology-System and Part-of-Speech Tagger for German | <|reference_start|>A Morphology-System and Part-of-Speech Tagger for German: This paper presents an integrated tool for German morphology and statistical part-of-speech tagging which aims at making some well established methods widely available. The software is very user friendly, runs on any PC and can be downloaded as a complete package (including lexicon and documentation) from the World Wide Web. Compared with the performance of other tagging systems the tagger produces similar results.<|reference_end|> | arxiv | @article{lezius1996a,
title={A Morphology-System and Part-of-Speech Tagger for German},
author={Wolfgang Lezius, Reinhard Rapp and Manfred Wettler (University of
Paderborn)},
journal={In: D. Gibbon, ed., Natural Language Processing and Speech Technology.
Results of the 3rd KONVENS Conference. Mouton de Gruyter, Berlin, 1996.},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9610006},
primaryClass={cmp-lg cs.CL}
} | lezius1996a |
arxiv-668925 | cmp-lg/9611001 | OT SIMPLE - a construction-kit approach to Optimality Theory implementation | <|reference_start|>OT SIMPLE - a construction-kit approach to Optimality Theory implementation: This paper details a simple approach to the implementation of Optimality Theory (OT, Prince and Smolensky 1993) on a computer, in part reusing standard system software. In a nutshell, OT's GENerating source is implemented as a BinProlog program interpreting a context-free specification of a GEN structural grammar according to a user-supplied input form. The resulting set of textually flattened candidate tree representations is passed to the CONstraint stage. Constraints are implemented by finite-state transducers specified as `sed' stream editor scripts that typically map ill-formed portions of the candidate to violation marks. EVALuation of candidates reduces to simple sorting: the violation-mark-annotated output leaving CON is fed into `sort', which orders candidates on the basis of the violation vector column of each line, thereby bringing the optimal candidate to the top. This approach gave rise to OT SIMPLE, the first freely available software tool for the OT framework to provide generic facilities for both GEN and CONstraint definition. Its practical applicability is demonstrated by modelling the OT analysis of apparent subtractive pluralization in Upper Hessian presented in Golston and Wiese (1996).<|reference_end|> | arxiv | @article{walther1996ot,
title={OT SIMPLE - a construction-kit approach to Optimality Theory
implementation},
author={Markus Walther (University of Duesseldorf, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9611001},
year={1996},
number={Arbeiten des Sonderforschungsbereichs 282, Nr. 88},
archivePrefix={arXiv},
eprint={cmp-lg/9611001},
primaryClass={cmp-lg cs.CL}
} | walther1996ot |
arxiv-668926 | cmp-lg/9611002 | Unsupervised Language Acquisition | <|reference_start|>Unsupervised Language Acquisition: This thesis presents a computational theory of unsupervised language acquisition, precisely defining procedures for learning language from ordinary spoken or written utterances, with no explicit help from a teacher. The theory is based heavily on concepts borrowed from machine learning and statistical estimation. In particular, learning takes place by fitting a stochastic, generative model of language to the evidence. Much of the thesis is devoted to explaining conditions that must hold for this general learning strategy to arrive at linguistically desirable grammars. The thesis introduces a variety of technical innovations, among them a common representation for evidence and grammars, and a learning strategy that separates the ``content'' of linguistic parameters from their representation. Algorithms based on it suffer from few of the search problems that have plagued other computational approaches to language acquisition. The theory has been tested on problems of learning vocabularies and grammars from unsegmented text and continuous speech, and mappings between sound and representations of meaning. It performs extremely well on various objective criteria, acquiring knowledge that causes it to assign almost exactly the same structure to utterances as humans do. This work has application to data compression, language modeling, speech recognition, machine translation, information retrieval, and other tasks that rely on either structural or stochastic descriptions of language.<|reference_end|> | arxiv | @article{de marcken1996unsupervised,
title={Unsupervised Language Acquisition},
author={Carl de Marcken (MIT)},
journal={arXiv preprint arXiv:cmp-lg/9611002},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9611002},
primaryClass={cmp-lg cs.CL}
} | de marcken1996unsupervised |
arxiv-668927 | cmp-lg/9611003 | Data-Oriented Language Processing An Overview | <|reference_start|>Data-Oriented Language Processing An Overview: During the last few years, a new approach to language processing has started to emerge, which has become known under various labels such as "data-oriented parsing", "corpus-based interpretation", and "tree-bank grammar" (cf. van den Berg et al. 1994; Bod 1992-96; Bod et al. 1996a/b; Bonnema 1996; Charniak 1996a/b; Goodman 1996; Kaplan 1996; Rajman 1995a/b; Scha 1990-92; Sekine & Grishman 1995; Sima'an et al. 1994; Sima'an 1995-96; Tugwell 1995). This approach, which we will call "data-oriented processing" or "DOP", embodies the assumption that human language perception and production works with representations of concrete past language experiences, rather than with abstract linguistic rules. The models that instantiate this approach therefore maintain large corpora of linguistic representations of previously occurring utterances. When processing a new input utterance, analyses of this utterance are constructed by combining fragments from the corpus; the occurrence-frequencies of the fragments are used to estimate which analysis is the most probable one. In this paper we give an in-depth discussion of a data-oriented processing model which employs a corpus of labelled phrase-structure trees. Then we review some other models that instantiate the DOP approach. Many of these models also employ labelled phrase-structure trees, but use different criteria for extracting fragments from the corpus or employ different disambiguation strategies (Bod 1996b; Charniak 1996a/b; Goodman 1996; Rajman 1995a/b; Sekine & Grishman 1995; Sima'an 1995-96); other models use richer formalisms for their corpus annotations (van den Berg et al. 1994; Bod et al., 1996a/b; Bonnema 1996; Kaplan 1996; Tugwell 1995).<|reference_end|> | arxiv | @article{bod1996data-oriented,
title={Data-Oriented Language Processing. An Overview},
author={Rens Bod and Remko Scha (University of Amsterdam)},
journal={arXiv preprint arXiv:cmp-lg/9611003},
year={1996},
number={ILLC Technical Report LP-96-13},
archivePrefix={arXiv},
eprint={cmp-lg/9611003},
primaryClass={cmp-lg cs.CL}
} | bod1996data-oriented |
arxiv-668928 | cmp-lg/9611004 | Nonuniform Markov models | <|reference_start|>Nonuniform Markov models: A statistical language model assigns probability to strings of arbitrary length. Unfortunately, it is not possible to gather reliable statistics on strings of arbitrary length from a finite corpus. Therefore, a statistical language model must decide that each symbol in a string depends on at most a small, finite number of other symbols in the string. In this report we propose a new way to model conditional independence in Markov models. The central feature of our nonuniform Markov model is that it makes predictions of varying lengths using contexts of varying lengths. Experiments on the Wall Street Journal reveal that the nonuniform model performs slightly better than the classic interpolated Markov model. This result is somewhat remarkable because both models contain identical numbers of parameters whose values are estimated in a similar manner. The only difference between the two models is how they combine the statistics of longer and shorter strings. Keywords: nonuniform Markov model, interpolated Markov model, conditional independence, statistical language model, discrete time series.<|reference_end|> | arxiv | @article{ristad1996nonuniform,
title={Nonuniform Markov models},
author={Eric Sven Ristad and Robert G. Thomas},
journal={arXiv preprint arXiv:cmp-lg/9611004},
year={1996},
number={CS-TR-536-96},
archivePrefix={arXiv},
eprint={cmp-lg/9611004},
primaryClass={cmp-lg cs.CL}
} | ristad1996nonuniform |
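
The entry above (cmp-lg/9611004) compares the proposed nonuniform model against the classic interpolated Markov model, which combines the statistics of longer and shorter contexts with fixed mixture weights. The sketch below shows only that interpolated baseline, as a deleted-interpolation trigram over a token list; the corpus, mixture weights, and function names are illustrative assumptions, and the authors' nonuniform model itself is not reconstructed.

```python
from collections import Counter

def interpolated_trigram(corpus, lambdas=(0.6, 0.3, 0.1)):
    """Classic interpolated Markov model: P(w | u, v) is a fixed mixture of
    trigram, bigram, and unigram relative frequencies.  The mixture weights
    `lambdas` are placeholder values; in practice they would be estimated
    on held-out data.  `corpus` is a non-empty list of tokens."""
    unigrams = Counter(corpus)
    bigrams = Counter(zip(corpus, corpus[1:]))
    trigrams = Counter(zip(corpus, corpus[1:], corpus[2:]))
    total = len(corpus)
    l3, l2, l1 = lambdas

    def prob(w, u, v):
        p3 = trigrams[(u, v, w)] / bigrams[(u, v)] if bigrams[(u, v)] else 0.0
        p2 = bigrams[(v, w)] / unigrams[v] if unigrams[v] else 0.0
        p1 = unigrams[w] / total
        return l3 * p3 + l2 * p2 + l1 * p1

    return prob

# e.g. p = interpolated_trigram("the cat sat on the mat".split())
#      p("sat", "the", "cat")
```
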
arxiv-668929 | cmp-lg/9611005 | Integrating HMM-Based Speech Recognition With Direct Manipulation In A Multimodal Korean Natural Language Interface | <|reference_start|>Integrating HMM-Based Speech Recognition With Direct Manipulation In A Multimodal Korean Natural Language Interface: This paper presents a HMM-based speech recognition engine and its integration into direct manipulation interfaces for Korean document editor. Speech recognition can reduce typical tedious and repetitive actions which are inevitable in standard GUIs (graphic user interfaces). Our system consists of general speech recognition engine called ABrain {Auditory Brain} and speech commandable document editor called SHE {Simple Hearing Editor}. ABrain is a phoneme-based speech recognition engine which shows up to 97% of discrete command recognition rate. SHE is a EuroBridge widget-based document editor that supports speech commands as well as direct manipulation interfaces.<|reference_end|> | arxiv | @article{lee1996integrating,
title={Integrating HMM-Based Speech Recognition With Direct Manipulation In A
Multimodal Korean Natural Language Interface},
author={Geunbae Lee, Jong-Hyeok Lee, Sangeok Kim (Department of Computer
Science and Engineering, Pohang University of Science and Technology)},
journal={arXiv preprint arXiv:cmp-lg/9611005},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9611005},
primaryClass={cmp-lg cs.CL}
} | lee1996integrating |
arxiv-668930 | cmp-lg/9611006 | A Framework for Natural Language Interfaces to Temporal Databases | <|reference_start|>A Framework for Natural Language Interfaces to Temporal Databases: Over the past thirty years, there has been considerable progress in the design of natural language interfaces to databases. Most of this work has concerned snapshot databases, in which there are only limited facilities for manipulating time-varying information. The database community is becoming increasingly interested in temporal databases, databases with special support for time-dependent entries. We have developed a framework for constructing natural language interfaces to temporal databases, drawing on research on temporal phenomena within logic and linguistics. The central part of our framework is a logic-like formal language, called TOP, which can capture the semantics of a wide range of English sentences. We have implemented an HPSG-based sentence analyser that converts a large set of English queries involving time into TOP formulae, and have formulated a provably correct procedure for translating TOP expressions into queries in the TSQL2 temporal database language. In this way we have established a sound route from English to a general-purpose temporal database language.<|reference_end|> | arxiv | @article{androutsopoulos1996a,
title={A Framework for Natural Language Interfaces to Temporal Databases},
author={I. Androutsopoulos (Microsoft Research Institute, Macquarie
University, Sydney), G.D. Ritchie (Dept. of Artificial Intelligence,
University of Edinburgh), P. Thanisch (Dept. of Computer Science, University
of Edinburgh)},
journal={Proc. of the 20th Australasian Computer Science Conference,
Sydney, 1997, pp. 307-315.},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9611006},
primaryClass={cmp-lg cs.CL}
} | androutsopoulos1996a |
arxiv-668931 | cmp-lg/9612001 | Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning | <|reference_start|>Comparative Experiments on Disambiguating Word Senses: An Illustration of the Role of Bias in Machine Learning: This paper describes an experimental comparison of seven different learning algorithms on the problem of learning to disambiguate the meaning of a word from context. The algorithms tested include statistical, neural-network, decision-tree, rule-based, and case-based classification techniques. The specific problem tested involves disambiguating six senses of the word ``line'' using the words in the current and proceeding sentence as context. The statistical and neural-network methods perform the best on this particular problem and we discuss a potential reason for this observed difference. We also discuss the role of bias in machine learning and its importance in explaining performance differences observed on specific problems.<|reference_end|> | arxiv | @article{mooney1996comparative,
title={Comparative Experiments on Disambiguating Word Senses: An Illustration
of the Role of Bias in Machine Learning},
author={Raymond J. Mooney (University of Texas at Austin)},
journal={arXiv preprint arXiv:cmp-lg/9612001},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9612001},
primaryClass={cmp-lg cs.CL}
} | mooney1996comparative |
arxiv-668932 | cmp-lg/9612002 | Specialized Language Models using Dialogue Predictions | <|reference_start|>Specialized Language Models using Dialogue Predictions: This paper analyses language modeling in spoken dialogue systems for accessing a database. The use of several language models obtained by exploiting dialogue predictions gives better results than the use of a single model for the whole dialogue interaction. For this reason several models have been created, each one for a specific system question, such as the request or the confirmation of a parameter. The use of dialogue-dependent language models increases the performance both at the recognition and at the understanding level, especially on answers to system requests. Moreover other methods to increase performance, like automatic clustering of vocabulary words or the use of better acoustic models during recognition, does not affect the improvements given by dialogue-dependent language models. The system used in our experiments is Dialogos, the Italian spoken dialogue system used for accessing railway timetable information over the telephone. The experiments were carried out on a large corpus of dialogues collected using Dialogos.<|reference_end|> | arxiv | @article{popovici1996specialized,
title={Specialized Language Models using Dialogue Predictions},
author={Cosmin Popovici (ICI - Bucuresti, Romania), Paolo Baggia (CSELT -
Turin, Italy)},
journal={arXiv preprint arXiv:cmp-lg/9612002},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9612002},
primaryClass={cmp-lg cs.CL}
} | popovici1996specialized |
arxiv-668933 | cmp-lg/9612003 | Metrics for Evaluating Dialogue Strategies in a Spoken Language System | <|reference_start|>Metrics for Evaluating Dialogue Strategies in a Spoken Language System: In this paper, we describe a set of metrics for the evaluation of different dialogue management strategies in an implemented real-time spoken language system. The set of metrics we propose offers useful insights in evaluating how particular choices in the dialogue management can affect the overall quality of the man-machine dialogue. The evaluation makes use of established metrics: the transaction success, the contextual appropriateness of system answers, the calculation of normal and correction turns in a dialogue. We also define a new metric, the implicit recovery, which allows to measure the ability of a dialogue manager to deal with errors by different levels of analysis. We report evaluation data from several experiments, and we compare two different approaches to dialogue repair strategies using the set of metrics we argue for.<|reference_end|> | arxiv | @article{danieli1996metrics,
title={Metrics for Evaluating Dialogue Strategies in a Spoken Language System},
author={Morena Danieli, Elisabetta Gerbino (CSELT - Torino, Italy)},
journal={arXiv preprint arXiv:cmp-lg/9612003},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9612003},
primaryClass={cmp-lg cs.CL}
} | danieli1996metrics |
arxiv-668934 | cmp-lg/9612004 | Dialogos: a Robust System for Human-Machine Spoken Dialogue on the Telephone | <|reference_start|>Dialogos: a Robust System for Human-Machine Spoken Dialogue on the Telephone: This paper presents Dialogos, a real-time system for human-machine spoken dialogue on the telephone in task-oriented domains. The system has been tested in a large trial with inexperienced users and it has proved robust enough to allow spontaneous interactions both to users which get good recognition performance and to the ones which get lower scores. The robust behavior of the system has been achieved by combining the use of specific language models during the recognition phase of analysis, the tolerance toward spontaneous speech phenomena, the activity of a robust parser, and the use of pragmatic-based dialogue knowledge. This integration of the different modules allows to deal with partial or total breakdowns of the different levels of analysis. We report the field trial data of the system and the evaluation results of the overall system and of the submodules.<|reference_end|> | arxiv | @article{albesano1996dialogos:,
title={Dialogos: a Robust System for Human-Machine Spoken Dialogue on the
Telephone},
author={Dario Albesano, Paolo Baggia, Morena Danieli, Roberto Gemello,
Elisabetta Gerbino, and Claudio Rullent (CSELT - Turin, Italy)},
journal={arXiv preprint arXiv:cmp-lg/9612004},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9612004},
primaryClass={cmp-lg cs.CL}
} | albesano1996dialogos: |
arxiv-668935 | cmp-lg/9612005 | Maximum Entropy Modeling Toolkit | <|reference_start|>Maximum Entropy Modeling Toolkit: The Maximum Entropy Modeling Toolkit supports parameter estimation and prediction for statistical language models in the maximum entropy framework. The maximum entropy framework provides a constructive method for obtaining the unique conditional distribution p*(y|x) that satisfies a set of linear constraints and maximizes the conditional entropy H(p|f) with respect to the empirical distribution f(x). The maximum entropy distribution p*(y|x) also has a unique parametric representation in the class of exponential models, as m(y|x) = r(y|x)/Z(x) where the numerator m(y|x) = prod_i alpha_i^g_i(x,y) is a product of exponential weights, with alpha_i = exp(lambda_i), and the denominator Z(x) = sum_y r(y|x) is required to satisfy the axioms of probability. This manual explains how to build maximum entropy models for discrete domains with the Maximum Entropy Modeling Toolkit (MEMT). First we summarize the steps necessary to implement a language model using the toolkit. Next we discuss the executables provided by the toolkit and explain the file formats required by the toolkit. Finally, we review the maximum entropy framework and apply it to the problem of statistical language modeling. Keywords: statistical language models, maximum entropy, exponential models, improved iterative scaling, Markov models, triggers.<|reference_end|> | arxiv | @article{ristad1996maximum,
title={Maximum Entropy Modeling Toolkit},
author={Eric Sven Ristad},
journal={arXiv preprint arXiv:cmp-lg/9612005},
year={1996},
archivePrefix={arXiv},
eprint={cmp-lg/9612005},
primaryClass={cmp-lg cs.CL}
} | ristad1996maximum |
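
The entry above (cmp-lg/9612005) states the parametric form of the conditional maximum entropy model explicitly: a product of exponential weights alpha_i^{g_i(x,y)} divided by the normalizer Z(x). The sketch below merely evaluates that formula for user-supplied feature functions and weights; the feature functions, label set, and names are illustrative assumptions, and this is neither the toolkit's interface nor its improved-iterative-scaling parameter estimator.

```python
def maxent_prob(y, x, labels, features, alphas):
    """Conditional exponential model, as in the abstract:
       p(y|x) = prod_i alpha_i^{g_i(x, y)} / Z(x),
       Z(x)   = sum_{y'} prod_i alpha_i^{g_i(x, y')}.
    `features` is a list of feature functions g_i(x, y) and `alphas` the
    corresponding positive weights alpha_i = exp(lambda_i)."""
    def weight(label):
        m = 1.0
        for g, alpha in zip(features, alphas):
            m *= alpha ** g(x, label)
        return m

    z = sum(weight(yp) for yp in labels)   # normalizer Z(x)
    return weight(y) / z

# Illustrative use (feature function and labels are made up):
# features = [lambda x, y: 1.0 if y == "NN" and x.endswith("ing") else 0.0]
# alphas   = [2.5]
# maxent_prob("NN", "running", ["NN", "VB"], features, alphas)  # ~0.714
```
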
arxiv-668936 | cmp-lg/9701001 | Exploiting Context to Identify Lexical Atoms -- A Statistical View of Linguistic Context | <|reference_start|>Exploiting Context to Identify Lexical Atoms -- A Statistical View of Linguistic Context: Interpretation of natural language is inherently context-sensitive. Most words in natural language are ambiguous and their meanings are heavily dependent on the linguistic context in which they are used. The study of lexical semantics can not be separated from the notion of context. This paper takes a contextual approach to lexical semantics and studies the linguistic context of lexical atoms, or "sticky" phrases such as "hot dog". Since such lexical atoms may occur frequently in unrestricted natural language text, recognizing them is crucial for understanding naturally-occurring text. The paper proposes several heuristic approaches to exploiting the linguistic context to identify lexical atoms from arbitrary natural language text.<|reference_end|> | arxiv | @article{zhai1997exploiting,
title={Exploiting Context to Identify Lexical Atoms -- A Statistical View of
Linguistic Context},
author={Chengxiang Zhai (Carnegie Mellon University)},
journal={Proceedings of the International and Interdisciplinary Conference
on Modelling and Using Context (CONTEXT-97), Rio de Janeiro, Brazil, Feb. 4-6,
1997. 119-129.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9701001},
primaryClass={cmp-lg cs.CL}
} | zhai1997exploiting |
arxiv-668937 | cmp-lg/9701002 | Hybrid language processing in the Spoken Language Translator | <|reference_start|>Hybrid language processing in the Spoken Language Translator: The paper presents an overview of the Spoken Language Translator (SLT) system's hybrid language-processing architecture, focussing on the way in which rule-based and statistical methods are combined to achieve robust and efficient performance within a linguistically motivated framework. In general, we argue that rules are desirable in order to encode domain-independent linguistic constraints and achieve high-quality grammatical output, while corpus-derived statistics are needed if systems are to be efficient and robust; further, that hybrid architectures are superior from the point of view of portability to architectures which only make use of one type of information. We address the topics of ``multi-engine'' strategies for robust translation; robust bottom-up parsing using pruning and grammar specialization; rational development of linguistic rule-sets using balanced domain corpora; and efficient supervised training by interactive disambiguation. All work described is fully implemented in the current version of the SLT-2 system.<|reference_end|> | arxiv | @article{rayner1997hybrid,
title={Hybrid language processing in the Spoken Language Translator},
author={Manny Rayner and David Carter (SRI International, Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9701002},
year={1997},
number={CRC-064},
archivePrefix={arXiv},
eprint={cmp-lg/9701002},
primaryClass={cmp-lg cs.CL}
} | rayner1997hybrid |
arxiv-668938 | cmp-lg/9701003 | Generating Information-Sharing Subdialogues in Expert-User Consultation | <|reference_start|>Generating Information-Sharing Subdialogues in Expert-User Consultation: In expert-consultation dialogues, it is inevitable that an agent will at times have insufficient information to determine whether to accept or reject a proposal by the other agent. This results in the need for the agent to initiate an information-sharing subdialogue to form a set of shared beliefs within which the agents can effectively re-evaluate the proposal. This paper presents a computational strategy for initiating such information-sharing subdialogues to resolve the system's uncertainty regarding the acceptance of a user proposal. Our model determines when information-sharing should be pursued, selects a focus of information-sharing among multiple uncertain beliefs, chooses the most effective information-sharing strategy, and utilizes the newly obtained information to re-evaluate the user proposal. Furthermore, our model is capable of handling embedded information-sharing subdialogues.<|reference_end|> | arxiv | @article{chu-carroll1997generating,
title={Generating Information-Sharing Subdialogues in Expert-User Consultation},
author={Jennifer Chu-Carroll (Bell Laboratories) and Sandra Carberry
(University of Delaware)},
journal={IJCAI'95},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9701003},
primaryClass={cmp-lg cs.CL}
} | chu-carroll1997generating |
arxiv-668939 | cmp-lg/9701004 | An Efficient Implementation of the Head-Corner Parser | <|reference_start|>An Efficient Implementation of the Head-Corner Parser: This paper describes an efficient and robust implementation of a bi-directional, head-driven parser for constraint-based grammars. This parser is developed for the OVIS system: a Dutch spoken dialogue system in which information about public transport can be obtained by telephone. After a review of the motivation for head-driven parsing strategies, and head-corner parsing in particular, a non-deterministic version of the head-corner parser is presented. A memoization technique is applied to obtain a fast parser. A goal-weakening technique is introduced which greatly improves average case efficiency, both in terms of speed and space requirements. I argue in favor of such a memoization strategy with goal-weakening in comparison with ordinary chart-parsers because such a strategy can be applied selectively and therefore enormously reduces the space requirements of the parser, while no practical loss in time-efficiency is observed. On the contrary, experiments are described in which head-corner and left-corner parsers implemented with selective memoization and goal weakening outperform `standard' chart parsers. The experiments include the grammar of the OVIS system and the Alvey NL Tools grammar. Head-corner parsing is a mix of bottom-up and top-down processing. Certain approaches towards robust parsing require purely bottom-up processing. Therefore, it seems that head-corner parsing is unsuitable for such robust parsing techniques. However, it is shown how underspecification (which arises very naturally in a logic programming environment) can be used in the head-corner parser to allow such robust parsing techniques. A particular robust parsing model is described which is implemented in OVIS.<|reference_end|> | arxiv | @article{van noord1997an,
title={An Efficient Implementation of the Head-Corner Parser},
author={Gertjan van Noord (Alfa-informatica, BCN, University of Groningen)},
journal={arXiv preprint arXiv:cmp-lg/9701004},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9701004},
primaryClass={cmp-lg cs.CL}
} | van noord1997an |
arxiv-668940 | cmp-lg/9702001 | SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks | <|reference_start|>SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis Using Artificial Neural Networks: In this paper, we describe a so-called screening approach for learning robust processing of spontaneously spoken language. A screening approach is a flat analysis which uses shallow sequences of category representations for analyzing an utterance at various syntactic, semantic and dialog levels. Rather than using a deeply structured symbolic analysis, we use a flat connectionist analysis. This screening approach aims at supporting speech and language processing by using (1) data-driven learning and (2) robustness of connectionist networks. In order to test this approach, we have developed the SCREEN system which is based on this new robust, learned and flat analysis. In this paper, we focus on a detailed description of SCREEN's architecture, the flat syntactic and semantic analysis, the interaction with a speech recognizer, and a detailed evaluation analysis of the robustness under the influence of noisy or incomplete input. The main result of this paper is that flat representations allow more robust processing of spontaneous spoken language than deeply structured representations. In particular, we show how the fault-tolerance and learning capability of connectionist networks can support a flat analysis for providing more robust spoken-language processing within an overall hybrid symbolic/connectionist framework.<|reference_end|> | arxiv | @article{wermter1997screen:,
title={SCREEN: Learning a Flat Syntactic and Semantic Spoken Language Analysis
Using Artificial Neural Networks},
author={Stefan Wermter, Volker Weber (University of Hamburg)},
journal={arXiv preprint arXiv:cmp-lg/9702001},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702001},
primaryClass={cmp-lg cs.CL}
} | wermter1997screen: |
arxiv-668941 | cmp-lg/9702002 | Automatic Extraction of Subcategorization from Corpora | <|reference_start|>Automatic Extraction of Subcategorization from Corpora: We describe a novel technique and implemented system for constructing a subcategorization dictionary from textual corpora. Each dictionary entry encodes the relative frequency of occurrence of a comprehensive set of subcategorization classes for English. An initial experiment, on a sample of 14 verbs which exhibit multiple complementation patterns, demonstrates that the technique achieves accuracy comparable to previous approaches, which are all limited to a highly restricted set of subcategorization classes. We also demonstrate that a subcategorization dictionary built with the system improves the accuracy of a parser by an appreciable amount.<|reference_end|> | arxiv | @article{briscoe1997automatic,
title={Automatic Extraction of Subcategorization from Corpora},
author={Ted Briscoe (Cambridge University), John Carroll (University of
Sussex)},
journal={arXiv preprint arXiv:cmp-lg/9702002},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702002},
primaryClass={cmp-lg cs.CL}
} | briscoe1997automatic |
arxiv-668942 | cmp-lg/9702003 | A Robust Text Processing Technique Applied to Lexical Error Recovery | <|reference_start|>A Robust Text Processing Technique Applied to Lexical Error Recovery: This thesis addresses automatic lexical error recovery and tokenization of corrupt text input. We propose a technique that can automatically correct misspellings, segmentation errors and real-word errors in a unified framework that uses both a model of language production and a model of the typing behavior, and which makes tokenization part of the recovery process. The typing process is modeled as a noisy channel where Hidden Markov Models are used to model the channel characteristics. Weak statistical language models are used to predict what sentences are likely to be transmitted through the channel. These components are held together in the Token Passing framework which provides the desired tight coupling between orthographic pattern matching and linguistic expectation. The system, CTR (Connected Text Recognition), has been tested on two corpora derived from two different applications, a natural language dialogue system and a transcription typing scenario. Experiments show that CTR can automatically correct a considerable portion of the errors in the test sets without introducing too much noise. The segmentation error correction rate is virtually faultless.<|reference_end|> | arxiv | @article{ingels1997a,
title={A Robust Text Processing Technique Applied to Lexical Error Recovery},
author={Peter Ingels (Linkoping University, Sweden)},
journal={arXiv preprint arXiv:cmp-lg/9702003},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702003},
primaryClass={cmp-lg cs.CL}
} | ingels1997a |
arxiv-668943 | cmp-lg/9702004 | An Annotation Scheme for Free Word Order Languages | <|reference_start|>An Annotation Scheme for Free Word Order Languages: We describe an annotation scheme and a tool developed for creating linguistically annotated corpora for non-configurational languages. Since the requirements for such a formalism differ from those posited for configurational languages, several features have been added, influencing the architecture of the scheme. The resulting scheme reflects a stratificational notion of language, and makes only minimal assumptions about the interrelation of the particular representational strata.<|reference_end|> | arxiv | @article{skut1997an,
title={An Annotation Scheme for Free Word Order Languages},
author={Wojciech Skut, Brigitte Krenn, Thorsten Brants, Hans Uszkoreit
(University of the Saarland, Saarbruecken, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9702004},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702004},
primaryClass={cmp-lg cs.CL}
} | skut1997an |
arxiv-668944 | cmp-lg/9702005 | Software Infrastructure for Natural Language Processing | <|reference_start|>Software Infrastructure for Natural Language Processing: We classify and review current approaches to software infrastructure for research, development and delivery of NLP systems. The task is motivated by a discussion of current trends in the field of NLP and Language Engineering. We describe a system called GATE (a General Architecture for Text Engineering) that provides a software infrastructure on top of which heterogeneous NLP processing modules may be evaluated and refined individually, or may be combined into larger application systems. GATE aims to support both researchers and developers working on component technologies (e.g. parsing, tagging, morphological analysis) and those working on developing end-user applications (e.g. information extraction, text summarisation, document generation, machine translation, and second language learning). GATE promotes reuse of component technology, permits specialisation and collaboration in large-scale projects, and allows for the comparison and evaluation of alternative technologies. The first release of GATE is now available - see http://www.dcs.shef.ac.uk/research/groups/nlp/gate/<|reference_end|> | arxiv | @article{cunningham1997software,
title={Software Infrastructure for Natural Language Processing},
author={Hamish Cunningham, Kevin Humphreys, Robert Gaizauskas, Yorick Wilks},
journal={5th Conference on Applied Natural Language Processing, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702005},
primaryClass={cmp-lg cs.CL}
} | cunningham1997software |
arxiv-668945 | cmp-lg/9702006 | Information Extraction - A User Guide | <|reference_start|>Information Extraction - A User Guide: This technical memo describes Information Extraction from the point-of-view of a potential user of the technology. No knowledge of language processing is assumed. Information Extraction is a process which takes unseen texts as input and produces fixed-format, unambiguous data as output. This data may be used directly for display to users, or may be stored in a database or spreadsheet for later analysis, or may be used for indexing purposes in Information Retrieval applications. See also http://www.dcs.shef.ac.uk/~hamish<|reference_end|> | arxiv | @article{cunningham1997information,
title={Information Extraction - A User Guide},
author={Hamish Cunningham},
journal={arXiv preprint arXiv:cmp-lg/9702006},
year={1997},
number={CS-97-02},
archivePrefix={arXiv},
eprint={cmp-lg/9702006},
primaryClass={cmp-lg cs.CL}
} | cunningham1997information |
arxiv-668946 | cmp-lg/9702007 | Natural Language Dialogue Service for Appointment Scheduling Agents | <|reference_start|>Natural Language Dialogue Service for Appointment Scheduling Agents: Appointment scheduling is a problem faced daily by many individuals and organizations. Cooperating agent systems have been developed to partially automate this task. In order to extend the circle of participants as far as possible we advocate the use of natural language transmitted by e-mail. We describe COSMA, a fully implemented German language server for existing appointment scheduling agent systems. COSMA can cope with multiple dialogues in parallel, and accounts for differences in dialogue behaviour between human and machine agents. NL coverage of the sublanguage is achieved through both corpus-based grammar development and the use of message extraction techniques.<|reference_end|> | arxiv | @article{busemann1997natural,
title={Natural Language Dialogue Service for Appointment Scheduling Agents},
author={Stephan Busemann, Thierry Declerck, Abdel Kader Diagne, Luca Dini,
Judith Klein, Sven Schmeier (DFKI GmbH)},
journal={Proc. 5th Conference on Applied Natural Language Processing, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702007},
primaryClass={cmp-lg cs.CL}
} | busemann1997natural |
arxiv-668947 | cmp-lg/9702008 | Sequential Model Selection for Word Sense Disambiguation | <|reference_start|>Sequential Model Selection for Word Sense Disambiguation: Statistical models of word-sense disambiguation are often based on a small number of contextual features or on a model that is assumed to characterize the interactions among a set of features. Model selection is presented as an alternative to these approaches, where a sequential search of possible models is conducted in order to find the model that best characterizes the interactions among features. This paper expands existing model selection methodology and presents the first comparative study of model selection search strategies and evaluation criteria when applied to the problem of building probabilistic classifiers for word-sense disambiguation.<|reference_end|> | arxiv | @article{pedersen1997sequential,
title={Sequential Model Selection for Word Sense Disambiguation},
author={Ted Pedersen (SMU), Rebecca Bruce (SMU), and Janyce Wiebe (NMSU)},
journal={Proceedings of the Fifth Conference on Applied Natural Language
Processing, April 1997, Washington, DC},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702008},
primaryClass={cmp-lg cs.CL}
} | pedersen1997sequential |
arxiv-668948 | cmp-lg/9702009 | Fast Statistical Parsing of Noun Phrases for Document Indexing | <|reference_start|>Fast Statistical Parsing of Noun Phrases for Document Indexing: Information Retrieval (IR) is an important application area of Natural Language Processing (NLP) where one encounters the genuine challenge of processing large quantities of unrestricted natural language text. While much effort has been made to apply NLP techniques to IR, very few NLP techniques have been evaluated on a document collection larger than several megabytes. Many NLP techniques are simply not efficient enough, and not robust enough, to handle a large amount of text. This paper proposes a new probabilistic model for noun phrase parsing, and reports on the application of such a parsing technique to enhance document indexing. The effectiveness of using syntactic phrases provided by the parser to supplement single words for indexing is evaluated with a 250 megabytes document collection. The experiment's results show that supplementing single words with syntactic phrases for indexing consistently and significantly improves retrieval performance.<|reference_end|> | arxiv | @article{zhai1997fast,
title={Fast Statistical Parsing of Noun Phrases for Document Indexing},
author={Chengxiang Zhai (Carnegie Mellon University)},
journal={arXiv preprint arXiv:cmp-lg/9702009},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702009},
primaryClass={cmp-lg cs.CL}
} | zhai1997fast |
arxiv-668949 | cmp-lg/9702010 | Selective Sampling of Effective Example Sentence Sets for Word Sense Disambiguation | <|reference_start|>Selective Sampling of Effective Example Sentence Sets for Word Sense Disambiguation: This paper proposes an efficient example selection method for example-based word sense disambiguation systems. To construct a practical size database, a considerable overhead for manual sense disambiguation is required. Our method is characterized by the reliance on the notion of the training utility: the degree to which each example is informative for future example selection when used for the training of the system. The system progressively collects examples by selecting those with greatest utility. The paper reports the effectivity of our method through experiments on about one thousand sentences. Compared to experiments with random example selection, our method reduced the overhead without the degeneration of the performance of the system.<|reference_end|> | arxiv | @article{fujii1997selective,
title={Selective Sampling of Effective Example Sentence Sets for Word Sense
Disambiguation},
author={Atsushi Fujii, Kentaro Inui, Takenobu Tokunaga and Hozumi Tanaka
(Tokyo Institute of Technology)},
journal={Proceedings of the Fourth Workshop on Very Large Corpora WVLC-4,
pp. 56-69, 1996},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702010},
primaryClass={cmp-lg cs.CL}
} | fujii1997selective |
arxiv-668950 | cmp-lg/9702011 | How much has information technology contributed to linguistics? | <|reference_start|>How much has information technology contributed to linguistics?: Information technology should have much to offer linguistics, not only through the opportunities offered by large-scale data analysis and the stimulus to develop formal computational models, but through the chance to use language in systems for automatic natural language processing. The paper discusses these possibilities in detail, and then examines the actual work that has been done. It is evident that this has so far been primarily research within a new field, computational linguistics, which is largely motivated by the demands, and interest, of practical processing systems, and that information technology has had rather little influence on linguistics at large. There are different reasons for this, and not all good ones: information technology deserves more attention from linguists.<|reference_end|> | arxiv | @article{jones1997how,
title={How much has information technology contributed to linguistics?},
author={Karen Sparck Jones (Computer Laboratory, University of Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9702011},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702011},
primaryClass={cmp-lg cs.CL}
} | jones1997how |
arxiv-668951 | cmp-lg/9702012 | Design and Implementation of a Computational Lexicon for Turkish | <|reference_start|>Design and Implementation of a Computational Lexicon for Turkish: All natural language processing systems (such as parsers, generators, taggers) need to have access to a lexicon about the words in the language. This thesis presents a lexicon architecture for natural language processing in Turkish. Given a query form consisting of a surface form and other features acting as restrictions, the lexicon produces feature structures containing morphosyntactic, syntactic, and semantic information for all possible interpretations of the surface form satisfying those restrictions. The lexicon is based on contemporary approaches like feature-based representation, inheritance, and unification. It makes use of two information sources: a morphological processor and a lexical database containing all the open and closed-class words of Turkish. The system has been implemented in SICStus Prolog as a standalone module for use in natural language processing applications.<|reference_end|> | arxiv | @article{yorulmaz1997design,
title={Design and Implementation of a Computational Lexicon for Turkish},
author={Abdullah Kurtulus Yorulmaz},
journal={arXiv preprint arXiv:cmp-lg/9702012},
year={1997},
number={BU-CEIS-9701},
archivePrefix={arXiv},
eprint={cmp-lg/9702012},
primaryClass={cmp-lg cs.CL}
} | yorulmaz1997design |
arxiv-668952 | cmp-lg/9702013 | Knowledge Acquisition for Content Selection | <|reference_start|>Knowledge Acquisition for Content Selection: An important part of building a natural-language generation (NLG) system is knowledge acquisition, that is deciding on the specific schemas, plans, grammar rules, and so forth that should be used in the NLG system. We discuss some experiments we have performed with KA for content-selection rules, in the context of building an NLG system which generates health-related material. These experiments suggest that it is useful to supplement corpus analysis with KA techniques developed for building expert systems, such as structured group discussions and think-aloud protocols. They also raise the point that KA issues may influence architectural design issues, in particular the decision on whether a planning approach is used for content selection. We suspect that in some cases, KA may be easier if other constructive expert-system techniques (such as production rules, or case-based reasoning) are used to determine the content of a generated text.<|reference_end|> | arxiv | @article{reiter1997knowledge,
title={Knowledge Acquisition for Content Selection},
author={Ehud Reiter (Aberdeen), Alison Cawsey (Heriot-Watt), Liesl Osman
(Aberdeen), and Yvonne Roff (Valstar Systems)},
journal={arXiv preprint arXiv:cmp-lg/9702013},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702013},
primaryClass={cmp-lg cs.CL}
} | reiter1997knowledge |
arxiv-668953 | cmp-lg/9702014 | Building a Generation Knowledge Source using Internet-Accessible Newswire | <|reference_start|>Building a Generation Knowledge Source using Internet-Accessible Newswire: In this paper, we describe a method for automatic creation of a knowledge source for text generation using information extraction over the Internet. We present a prototype system called PROFILE which uses a client-server architecture to extract noun-phrase descriptions of entities such as people, places, and organizations. The system serves two purposes: as an information extraction tool, it allows users to search for textual descriptions of entities; as a utility to generate functional descriptions (FD), it is used in a functional-unification based generation system. We present an evaluation of the approach and its applications to natural language generation and summarization.<|reference_end|> | arxiv | @article{radev1997building,
title={Building a Generation Knowledge Source using Internet-Accessible
Newswire},
author={Dragomir R. Radev (Columbia University), Kathleen R. McKeown (Columbia
University)},
journal={To appear in Proceedings of the 5th Conference on Applied Natural
Language Processing, Washington DC, 31 March - 3 April, 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702014},
primaryClass={cmp-lg cs.CL}
} | radev1997building |
arxiv-668954 | cmp-lg/9702015 | Improvising Linguistic Style: Social and Affective Bases for Agent Personality | <|reference_start|>Improvising Linguistic Style: Social and Affective Bases for Agent Personality: This paper introduces Linguistic Style Improvisation, a theory and set of algorithms for improvisation of spoken utterances by artificial agents, with applications to interactive story and dialogue systems. We argue that linguistic style is a key aspect of character, and show how speech act representations common in AI can provide abstract representations from which computer characters can improvise. We show that the mechanisms proposed introduce the possibility of socially oriented agents, meet the requirements that lifelike characters be believable, and satisfy particular criteria for improvisation proposed by Hayes-Roth.<|reference_end|> | arxiv | @article{walker1997improvising,
title={Improvising Linguistic Style: Social and Affective Bases for Agent
Personality},
author={Marilyn A. Walker, Janet E. Cahn and Stephen J. Whittaker},
journal={Proceedings of the First International Conference on Autonomous
Agents, Marina del Rey, California, USA. 1997. pp 96-105},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9702015},
primaryClass={cmp-lg cs.CL}
} | walker1997improvising |
arxiv-668955 | cmp-lg/9702016 | Instructions for Temporal Annotation of Scheduling Dialogs | <|reference_start|>Instructions for Temporal Annotation of Scheduling Dialogs: Human annotation of natural language facilitates standardized evaluation of natural language processing systems and supports automated feature extraction. This document consists of instructions for annotating the temporal information in scheduling dialogs, dialogs in which the participants schedule a meeting with one another. Task-oriented dialogs, such as these are, would arise in many useful applications, for instance, automated information providers and automated phone operators. Explicit instructions support good inter-rater reliability and serve as documentation for the classes being annotated.<|reference_end|> | arxiv | @article{o'hara1997instructions,
title={Instructions for Temporal Annotation of Scheduling Dialogs},
author={Tom O'Hara, Janyce Wiebe, and Karen Payne (New Mexico State
University)},
journal={arXiv preprint arXiv:cmp-lg/9702016},
year={1997},
number={MCCS-97-308},
archivePrefix={arXiv},
eprint={cmp-lg/9702016},
primaryClass={cmp-lg cs.CL}
} | o'hara1997instructions |
arxiv-668956 | cmp-lg/9703001 | Domain Adaptation with Clustered Language Models | <|reference_start|>Domain Adaptation with Clustered Language Models: In this paper, a method of domain adaptation for clustered language models is developed. It is based on a previously developed clustering algorithm, but with a modified optimisation criterion. The results are shown to be slightly superior to the previously published 'Fillup' method, which can be used to adapt standard n-gram models. However, the improvement both methods give compared to models built from scratch on the adaptation data is quite small (less than 11% relative improvement in word error rate). This suggests that both methods are still unsatisfactory from a practical point of view.<|reference_end|> | arxiv | @article{ueberla1997domain,
title={Domain Adaptation with Clustered Language Models},
author={Joerg P. Ueberla (Forum Technology - DRA Malvern)},
journal={arXiv preprint arXiv:cmp-lg/9703001},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9703001},
primaryClass={cmp-lg cs.CL}
} | ueberla1997domain |
arxiv-668957 | cmp-lg/9703002 | Concept Clustering and Knowledge Integration from a Children's Dictionary | <|reference_start|>Concept Clustering and Knowledge Integration from a Children's Dictionary: Knowledge structures called Concept Clustering Knowledge Graphs (CCKGs) are introduced along with a process for their construction from a machine readable dictionary. CCKGs contain multiple concepts interrelated through multiple semantic relations together forming a semantic cluster represented by a conceptual graph. The knowledge acquisition is performed on a children's first dictionary. A collection of conceptual clusters together can form the basis of a lexical knowledge base, where each CCKG contains a limited number of highly connected words giving useful information about a particular domain or situation.<|reference_end|> | arxiv | @article{barriere1997concept,
title={Concept Clustering and Knowledge Integration from a Children's
Dictionary},
author={Caroline Barriere, Fred Popowich},
journal={COLING'96},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9703002},
primaryClass={cmp-lg cs.CL}
} | barriere1997concept |
arxiv-668958 | cmp-lg/9703003 | A Semantics-based Communication System for Dysphasic Subjects | <|reference_start|>A Semantics-based Communication System for Dysphasic Subjects: Dysphasic subjects do not have complete linguistic abilities and only produce a weakly structured, topicalized language. They are offered artificial symbolic languages to help them communicate in a way more adapted to their linguistic abilities. After a structural analysis of a corpus of utterances from children with cerebral palsy, we define a semantic lexicon for such a symbolic language. We use it as the basis of a semantic analysis process able to retrieve an interpretation of the utterances. This semantic analyser is currently used in an application designed to convert iconic languages into natural language; it might find other uses in the field of language rehabilitation.<|reference_end|> | arxiv | @article{vaillant1997a,
title={A Semantics-based Communication System for Dysphasic Subjects},
author={Pascal Vaillant (Thomson-CSF LCR)},
journal={arXiv preprint arXiv:cmp-lg/9703003},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9703003},
primaryClass={cmp-lg cs.CL}
} | vaillant1997a |
arxiv-668959 | cmp-lg/9703004 | Insights into the Dialogue Processing of VERBMOBIL | <|reference_start|>Insights into the Dialogue Processing of VERBMOBIL: We present the dialogue module of the speech-to-speech translation system VERBMOBIL. We follow the approach that the solution to dialogue processing in a mediating scenario can not depend on a single constrained processing tool, but on a combination of several simple, efficient, and robust components. We show how our solution to dialogue processing works when applied to real data, and give some examples where our module contributes to the correct translation from German to English.<|reference_end|> | arxiv | @article{alexandersson1997insights,
title={Insights into the Dialogue Processing of VERBMOBIL},
author={Jan Alexandersson, Norbert Reithinger, Elisabeth Maier (DFKI GmbH,
Saarbruecken, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9703004},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9703004},
primaryClass={cmp-lg cs.CL}
} | alexandersson1997insights |
arxiv-668960 | cmp-lg/9703005 | Semi-Automatic Acquisition of Domain-Specific Translation Lexicons | <|reference_start|>Semi-Automatic Acquisition of Domain-Specific Translation Lexicons: We investigate the utility of an algorithm for translation lexicon acquisition (SABLE), used previously on a very large corpus to acquire general translation lexicons, when that algorithm is applied to a much smaller corpus to produce candidates for domain-specific translation lexicons.<|reference_end|> | arxiv | @article{resnik1997semi-automatic,
title={Semi-Automatic Acquisition of Domain-Specific Translation Lexicons},
author={Philip Resnik (University of Maryland) and I. Dan Melamed (University
of Pennsylvania)},
journal={Proceedings of the 5th ANLP Conference, 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9703005},
primaryClass={cmp-lg cs.CL}
} | resnik1997semi-automatic |
arxiv-668961 | cmp-lg/9704001 | Evaluating Multilingual Gisting of Web Pages | <|reference_start|>Evaluating Multilingual Gisting of Web Pages: We describe a prototype system for multilingual gisting of Web pages, and present an evaluation methodology based on the notion of gisting as decision support. This evaluation paradigm is straightforward, rigorous, permits fair comparison of alternative approaches, and should easily generalize to evaluation in other situations where the user is faced with decision-making on the basis of information in restricted or alternative form.<|reference_end|> | arxiv | @article{resnik1997evaluating,
title={Evaluating Multilingual Gisting of Web Pages},
author={Philip Resnik (University of Maryland)},
journal={arXiv preprint arXiv:cmp-lg/9704001},
year={1997},
number={CS-TR-3783/LAMP-TR-009/UMIACS-TR-97-39},
archivePrefix={arXiv},
eprint={cmp-lg/9704001},
primaryClass={cmp-lg cs.CL}
} | resnik1997evaluating |
arxiv-668962 | cmp-lg/9704002 | A Maximum Entropy Approach to Identifying Sentence Boundaries | <|reference_start|>A Maximum Entropy Approach to Identifying Sentence Boundaries: We present a trainable model for identifying sentence boundaries in raw text. Given a corpus annotated with sentence boundaries, our model learns to classify each occurrence of ., ?, and ! as either a valid or invalid sentence boundary. The training procedure requires no hand-crafted rules, lexica, part-of-speech tags, or domain-specific information. The model can therefore be trained easily on any genre of English, and should be trainable on any other Roman-alphabet language. Performance is comparable to or better than the performance of similar systems, but we emphasize the simplicity of retraining for new domains.<|reference_end|> | arxiv | @article{reynar1997a,
title={A Maximum Entropy Approach to Identifying Sentence Boundaries},
author={Jeffrey C. Reynar (University of Pennsylvania), Adwait Ratnaparkhi
(University of Pennsylvania)},
journal={Proceedings of the 5th ANLP Conference, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704002},
primaryClass={cmp-lg cs.CL}
} | reynar1997a |
arxiv-668963 | cmp-lg/9704003 | Machine Transliteration | <|reference_start|>Machine Transliteration: It is challenging to translate names and technical terms across languages with different alphabets and sound inventories. These items are commonly transliterated, i.e., replaced with approximate phonetic equivalents. For example, "computer" in English comes out as "konpyuutaa" in Japanese. Translating such items from Japanese back to English is even more challenging, and of practical interest, as transliterated items make up the bulk of text phrases not found in bilingual dictionaries. We describe and evaluate a method for performing backwards transliterations by machine. This method uses a generative model, incorporating several distinct stages in the transliteration process.<|reference_end|> | arxiv | @article{knight1997machine,
title={Machine Transliteration},
author={Kevin Knight (USC/ISI) and Jonathan Graehl (USC)},
journal={arXiv preprint arXiv:cmp-lg/9704003},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704003},
primaryClass={cmp-lg cs.CL}
} | knight1997machine |
arxiv-668964 | cmp-lg/9704004 | PARADISE: A Framework for Evaluating Spoken Dialogue Agents | <|reference_start|>PARADISE: A Framework for Evaluating Spoken Dialogue Agents: This paper presents PARADISE (PARAdigm for DIalogue System Evaluation), a general framework for evaluating spoken dialogue agents. The framework decouples task requirements from an agent's dialogue behaviors, supports comparisons among dialogue strategies, enables the calculation of performance over subdialogues and whole dialogues, specifies the relative contribution of various factors to performance, and makes it possible to compare agents performing different tasks by normalizing for task complexity.<|reference_end|> | arxiv | @article{walker1997paradise:,
title={PARADISE: A Framework for Evaluating Spoken Dialogue Agents},
author={Marilyn A. Walker, Diane J. Litman, Candace A. Kamm and Alicia Abella
(ATT Labs - Research)},
journal={Proceedings of the 35th Annual Meeting of the Association for
Computational Linguistics},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704004},
primaryClass={cmp-lg cs.CL}
} | walker1997paradise: |
arxiv-668965 | cmp-lg/9704005 | Tracking Initiative in Collaborative Dialogue Interactions | <|reference_start|>Tracking Initiative in Collaborative Dialogue Interactions: In this paper, we argue for the need to distinguish between task and dialogue initiatives, and present a model for tracking shifts in both types of initiatives in dialogue interactions. Our model predicts the initiative holders in the next dialogue turn based on the current initiative holders and the effect that observed cues have on changing them. Our evaluation across various corpora shows that the use of cues consistently improves the accuracy in the system's prediction of task and dialogue initiative holders by 2-4 and 8-13 percentage points, respectively, thus illustrating the generality of our model.<|reference_end|> | arxiv | @article{chu-carroll1997tracking,
title={Tracking Initiative in Collaborative Dialogue Interactions},
author={Jennifer Chu-Carroll and Michael K. Brown (Bell Laboratories)},
journal={ACL-97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704005},
primaryClass={cmp-lg cs.CL}
} | chu-carroll1997tracking |
arxiv-668966 | cmp-lg/9704006 | Representing Constraints with Automata | <|reference_start|>Representing Constraints with Automata: In this paper we describe an approach to constraint-based syntactic theories in terms of finite tree automata. The solutions to constraints expressed in weak monadic second order (MSO) logic are represented by tree automata recognizing the assignments which make the formulas true. We show that this allows an efficient representation of knowledge about the content of constraints which can be used as a practical tool for grammatical theory verification. We achieve this by using the intertranslatability of formulas of MSO logic and tree automata and the embedding of MSO logic into a constraint logic programming scheme. The usefulness of the approach is discussed with examples from the realm of Principles-and-Parameters based parsing.<|reference_end|> | arxiv | @article{morawietz1997representing,
title={Representing Constraints with Automata},
author={Frank Morawietz and Tom Cornell (University of Tuebingen)},
journal={arXiv preprint arXiv:cmp-lg/9704006},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704006},
primaryClass={cmp-lg cs.CL}
} | morawietz1997representing |
arxiv-668967 | cmp-lg/9704007 | Combining Unsupervised Lexical Knowledge Methods for Word Sense Disambiguation | <|reference_start|>Combining Unsupervised Lexical Knowledge Methods for Word Sense Disambiguation: This paper presents a method to combine a set of unsupervised algorithms that can accurately disambiguate word senses in a large, completely untagged corpus. Although most of the techniques for word sense resolution have been presented as stand-alone, it is our belief that full-fledged lexical ambiguity resolution should combine several information sources and techniques. The set of techniques have been applied in a combined way to disambiguate the genus terms of two machine-readable dictionaries (MRD), enabling us to construct complete taxonomies for Spanish and French. Tested accuracy is above 80% overall and 95% for two-way ambiguous genus terms, showing that taxonomy building is not limited to structured dictionaries such as LDOCE.<|reference_end|> | arxiv | @article{rigau1997combining,
title={Combining Unsupervised Lexical Knowledge Methods for Word Sense
Disambiguation},
author={German Rigau, Jordi Atserias and Eneko Agirre},
journal={Proceedings of ACL'97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704007},
primaryClass={cmp-lg cs.CL}
} | rigau1997combining |
arxiv-668968 | cmp-lg/9704008 | Intonational Boundaries, Speech Repairs and Discourse Markers: Modeling Spoken Dialog | <|reference_start|>Intonational Boundaries, Speech Repairs and Discourse Markers: Modeling Spoken Dialog: To understand a speaker's turn of a conversation, one needs to segment it into intonational phrases, clean up any speech repairs that might have occurred, and identify discourse markers. In this paper, we argue that these problems must be resolved together, and that they must be resolved early in the processing stream. We put forward a statistical language model that resolves these problems, does POS tagging, and can be used as the language model of a speech recognizer. We find that by accounting for the interactions between these tasks that the performance on each task improves, as does POS tagging and perplexity.<|reference_end|> | arxiv | @article{heeman1997intonational,
title={Intonational Boundaries, Speech Repairs and Discourse Markers: Modeling
Spoken Dialog},
author={Peter A. Heeman (University of Rochester), James F. Allen (University
of Rochester)},
journal={In proceedings of ACL/EACL'97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704008},
primaryClass={cmp-lg cs.CL}
} | heeman1997intonational |
arxiv-668969 | cmp-lg/9704009 | Developing a hybrid NP parser | <|reference_start|>Developing a hybrid NP parser: We describe the use of energy function optimization in very shallow syntactic parsing. The approach can use linguistic rules and corpus-based statistics, so the strengths of both linguistic and statistical approaches to NLP can be combined in a single framework. The rules are contextual constraints for resolving syntactic ambiguities expressed as alternative tags, and the statistical language model consists of corpus-based n-grams of syntactic tags. The success of the hybrid syntactic disambiguator is evaluated against a held-out benchmark corpus. Also the contributions of the linguistic and statistical language models to the hybrid model are estimated.<|reference_end|> | arxiv | @article{voutilainen1997developing,
title={Developing a hybrid NP parser},
author={Atro Voutilainen and Lluis Padro},
journal={Proceedings of 5th ANLP, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704009},
primaryClass={cmp-lg cs.CL}
} | voutilainen1997developing |
arxiv-668970 | cmp-lg/9704010 | The Theoretical Status of Ontologies in Natural Language Processing | <|reference_start|>The Theoretical Status of Ontologies in Natural Language Processing: This paper discusses the use of `ontologies' in Natural Language Processing. It classifies various kinds of ontologies that have been employed in NLP and discusses various benefits and problems with those designs. Particular focus is then placed on experiences gained in the use of the Upper Model, a linguistically-motivated `ontology' originally designed for use with the Penman text generation system. Some proposals for further NLP ontology design criteria are then made.<|reference_end|> | arxiv | @article{bateman1997the,
title={The Theoretical Status of Ontologies in Natural Language Processing},
author={John A. Bateman (Dept. of English, University of Stirling)},
journal={arXiv preprint arXiv:cmp-lg/9704010},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704010},
primaryClass={cmp-lg cs.CL}
} | bateman1997the |
arxiv-668971 | cmp-lg/9704011 | Morphological Disambiguation by Voting Constraints | <|reference_start|>Morphological Disambiguation by Voting Constraints: We present a constraint-based morphological disambiguation system in which individual constraints vote on matching morphological parses, and disambiguation of all the tokens in a sentence is performed at the end by selecting parses that receive the highest votes. This constraint application paradigm makes the outcome of the disambiguation independent of the rule sequence, and hence relieves the rule developer from worrying about potentially conflicting rule sequencing. Our results for disambiguating Turkish indicate that using about 500 constraint rules and some additional simple statistics, we can attain a recall of 95-96% and a precision of 94-95% with about 1.01 parses per token. Our system is implemented in Prolog and we are currently investigating an efficient implementation based on finite state transducers.<|reference_end|> | arxiv | @article{oflazer1997morphological,
title={Morphological Disambiguation by Voting Constraints},
author={Kemal Oflazer and Gokhan Tur (Bilkent University, Ankara, Turkey)},
journal={arXiv preprint arXiv:cmp-lg/9704011},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704011},
primaryClass={cmp-lg cs.CL}
} | oflazer1997morphological |
arxiv-668972 | cmp-lg/9704012 | Emphatic generation: employing the theory of semantic emphasis for text generation | <|reference_start|>Emphatic generation: employing the theory of semantic emphasis for text generation: The paper deals with the problem of text generation and planning approaches making only limited formally specifiable contact with accounts of grammar. We propose an enhancement of a systemically-based generation architecture for German (the KOMET system) by aspects of Kunze's theory of semantic emphasis. Doing this, we gain more control over both concept selection in generation and choice of fine-grained grammatical variation.<|reference_end|> | arxiv | @article{teich1997emphatic,
title={Emphatic generation: employing the theory of semantic emphasis for text
generation},
author={Elke Teich, Beate Firzlaff and John A. Bateman (GMD, Darmstadt)},
journal={arXiv preprint arXiv:cmp-lg/9704012},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704012},
primaryClass={cmp-lg cs.CL}
} | teich1997emphatic |
arxiv-668973 | cmp-lg/9704013 | A Theory of Parallelism and the Case of VP Ellipsis | <|reference_start|>A Theory of Parallelism and the Case of VP Ellipsis: We provide a general account of parallelism in discourse, and apply it to the special case of resolving possible readings for instances of VP ellipsis. We show how several problematic examples are accounted for in a natural and straightforward fashion. The generality of the approach makes it directly applicable to a variety of other types of ellipsis and reference.<|reference_end|> | arxiv | @article{hobbs1997a,
title={A Theory of Parallelism and the Case of VP Ellipsis},
author={Jerry R. Hobbs and Andrew Kehler (SRI International)},
journal={Proceedings of ACL-97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704013},
primaryClass={cmp-lg cs.CL}
} | hobbs1997a |
arxiv-668974 | cmp-lg/9704014 | Centering in-the-large: Computing referential discourse segments | <|reference_start|>Centering in-the-large: Computing referential discourse segments: We specify an algorithm that builds up a hierarchy of referential discourse segments from local centering data. The spatial extension and nesting of these discourse segments constrain the reachability of potential antecedents of an anaphoric expression beyond the local level of adjacent center pairs. Thus, the centering model is scaled up to the level of the global referential structure of discourse. An empirical evaluation of the algorithm is supplied.<|reference_end|> | arxiv | @article{hahn1997centering,
title={Centering in-the-large: Computing referential discourse segments},
author={Udo Hahn and Michael Strube (Computational Linguistics Research Group,
Freiburg University, Freiburg, Germany)},
journal={Proceedings of ACL 97 / EACL 97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9704014},
primaryClass={cmp-lg cs.CL}
} | hahn1997centering |
arxiv-668975 | cmp-lg/9705001 | Co-evolution of Language and of the Language Acquisition Device | <|reference_start|>Co-evolution of Language and of the Language Acquisition Device: A new account of parameter setting during grammatical acquisition is presented in terms of Generalized Categorial Grammar embedded in a default inheritance hierarchy, providing a natural partial ordering on the setting of parameters. Experiments show that several experimentally effective learners can be defined in this framework. Evolutionary simulations suggest that a learner with default initial settings for parameters will emerge, provided that learning is memory limited and the environment of linguistic adaptation contains an appropriate language.<|reference_end|> | arxiv | @article{briscoe1997co-evolution,
title={Co-evolution of Language and of the Language Acquisition Device},
author={Ted Briscoe (Computer Laboratory, University of Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9705001},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705001},
primaryClass={cmp-lg cs.CL}
} | briscoe1997co-evolution |
arxiv-668976 | cmp-lg/9705002 | Sloppy Identity | <|reference_start|>Sloppy Identity: Although sloppy interpretation is usually accounted for by theories of ellipsis, it often arises in non-elliptical contexts. In this paper, a theory of sloppy interpretation is provided which captures this fact. The underlying idea is that sloppy interpretation results from a semantic constraint on parallel structures and the theory is shown to predict sloppy readings for deaccented and paycheck sentences as well as relational-, event-, and one-anaphora. It is further shown to capture the interaction of sloppy/strict ambiguity with quantification and binding.<|reference_end|> | arxiv | @article{gardent1997sloppy,
title={Sloppy Identity},
author={Claire Gardent (University of the Saarland, Saarbruecken, Germany)},
journal={Logical Aspects of Computational Linguistics, Springer-Verlag.},
year={1997},
number={CLAUS Nr.88, University of Saarbruecken},
archivePrefix={arXiv},
eprint={cmp-lg/9705002},
primaryClass={cmp-lg cs.CL}
} | gardent1997sloppy |
arxiv-668977 | cmp-lg/9705003 | Grammatical analysis in the OVIS spoken-dialogue system | <|reference_start|>Grammatical analysis in the OVIS spoken-dialogue system: We argue that grammatical processing is a viable alternative to concept spotting for processing spoken input in a practical dialogue system. We discuss the structure of the grammar, the properties of the parser, and a method for achieving robustness. We discuss test results suggesting that grammatical processing allows fast and accurate processing of spoken input.<|reference_end|> | arxiv | @article{nederhof1997grammatical,
title={Grammatical analysis in the OVIS spoken-dialogue system},
author={Mark-Jan Nederhof, Gosse Bouma, Rob Koeling, Gertjan van Noord
(University of Groningen, Humanities Computing)},
journal={ACL/EACL 1997 Workshop on Spoken Dialog Systems},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705003},
primaryClass={cmp-lg cs.CL}
} | nederhof1997grammatical |
arxiv-668978 | cmp-lg/9705004 | Computing Parallelism in Discourse | <|reference_start|>Computing Parallelism in Discourse: Although much has been said about parallelism in discourse, a formal, computational theory of parallelism structure is still outstanding. In this paper, we present a theory which given two parallel utterances predicts which are the parallel elements. The theory consists of a sorted, higher-order abductive calculus and we show that it reconciles the insights of discourse theories of parallelism with those of Higher-Order Unification approaches to discourse semantics, thereby providing a natural framework in which to capture the effect of parallelism on discourse semantics.<|reference_end|> | arxiv | @article{gardent1997computing,
title={Computing Parallelism in Discourse},
author={Claire Gardent and Michael Kohlhase (Universitaet des Saarlandes,
Saarbruecken, Germany)},
journal={Proceedings of IJCAI'97},
year={1997},
number={CLAUS Nr. 90},
archivePrefix={arXiv},
eprint={cmp-lg/9705004},
primaryClass={cmp-lg cs.CL}
} | gardent1997computing |
arxiv-668979 | cmp-lg/9705005 | Document Classification Using a Finite Mixture Model | <|reference_start|>Document Classification Using a Finite Mixture Model: We propose a new method of classifying documents into categories. The simple method of conducting hypothesis testing over word-based distributions in categories suffers from the data sparseness problem. In order to address this difficulty, Guthrie et.al. have developed a method using distributions based on hard clustering of words, i.e., in which a word is assigned to a single cluster and words in the same cluster are treated uniformly. This method might, however, degrade classification results, since the distributions it employs are not always precise enough for representing the differences between categories. We propose here the use of soft clustering of words, i.e., in which a word can be assigned to several different clusters and each cluster is characterized by a specific word probability distribution. We define for each document category a finite mixture model, which is a linear combination of the probability distributions of the clusters. We thereby treat the problem of classifying documents as that of conducting statistical hypothesis testing over finite mixture models. In order to accomplish this testing, we employ the EM algorithm which helps efficiently estimate parameters in a finite mixture model. Experimental results indicate that our method outperforms not only the method using distributions based on hard clustering, but also the method using word-based distributions and the method based on cosine-similarity.<|reference_end|> | arxiv | @article{li1997document,
title={Document Classification Using a Finite Mixture Model},
author={Hang Li, Kenji Yamanishi (C&C Res. Labs., NEC)},
journal={arXiv preprint arXiv:cmp-lg/9705005},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705005},
primaryClass={cmp-lg cs.CL}
} | li1997document |
arxiv-668980 | cmp-lg/9705006 | Quantitative Constraint Logic Programming for Weighted Grammar Applications | <|reference_start|>Quantitative Constraint Logic Programming for Weighted Grammar Applications: Constraint logic grammars provide a powerful formalism for expressing complex logical descriptions of natural language phenomena in exact terms. Describing some of these phenomena may, however, require some form of graded distinctions which are not provided by such grammars. Recent approaches to weighted constraint logic grammars attempt to address this issue by adding numerical calculation schemata to the deduction scheme of the underlying CLP framework. Currently, these extralogical extensions are not related to the model-theoretic counterpart of the operational semantics of CLP, i.e., they do not come with a formal semantics at all. The aim of this paper is to present a clear formal semantics for weighted constraint logic grammars, which abstracts away from specific interpretations of weights, but nevertheless gives insights into the parsing problem for such weighted grammars. Building on the formalization of constraint logic grammars in the CLP scheme of Hoehfeld and Smolka 1988, this formal semantics will be given by a quantitative version of CLP. Such a quantitative CLP scheme can also be valuable for CLP tasks independent of grammars.<|reference_end|> | arxiv | @article{riezler1997quantitative,
title={Quantitative Constraint Logic Programming for Weighted Grammar
Applications},
author={Stefan Riezler (University of Tuebingen)},
journal={Logical Aspects of Computational Linguistics (LACL'96), LNCS,
Springer.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705006},
primaryClass={cmp-lg cs.CL}
} | riezler1997quantitative |
arxiv-668981 | cmp-lg/9705007 | Recycling Lingware in a Multilingual MT System | <|reference_start|>Recycling Lingware in a Multilingual MT System: We describe two methods relevant to multi-lingual machine translation systems, which can be used to port linguistic data (grammars, lexicons and transfer rules) between systems used for processing related languages. The methods are fully implemented within the Spoken Language Translator system, and were used to create versions of the system for two new language pairs using only a month of expert effort.<|reference_end|> | arxiv | @article{rayner1997recycling,
title={Recycling Lingware in a Multilingual MT System},
author={Manny Rayner and David Carter (SRI International, Cambridge), Ivan
Bretan, Robert Eklund and Mats Wiren (Telia Research), Steffen Leo Hansen,
Sabine Kirchmeier-Andersen, Christina Philp, Finn Sorensen and Hanne Erdman
Thomsen (Copenhagen Business School)},
journal={arXiv preprint arXiv:cmp-lg/9705007},
year={1997},
number={CRC-067},
archivePrefix={arXiv},
eprint={cmp-lg/9705007},
primaryClass={cmp-lg cs.CL}
} | rayner1997recycling |
arxiv-668982 | cmp-lg/9705008 | The TreeBanker: a Tool for Supervised Training of Parsed Corpora | <|reference_start|>The TreeBanker: a Tool for Supervised Training of Parsed Corpora: I describe the TreeBanker, a graphical tool for the supervised training involved in domain customization of the disambiguation component of a speech- or language-understanding system. The TreeBanker presents a user, who need not be a system expert, with a range of properties that distinguish competing analyses for an utterance and that are relatively easy to judge. This allows training on a corpus to be completed in far less time, and with far less expertise, than would be needed if analyses were inspected directly: it becomes possible for a corpus of about 20,000 sentences of the complexity of those in the ATIS corpus to be judged in around three weeks of work by a linguistically aware non-expert.<|reference_end|> | arxiv | @article{carter1997the,
title={The TreeBanker: a Tool for Supervised Training of Parsed Corpora},
author={David Carter (SRI International, Cambridge)},
journal={"Computational Environments ..." (ENVGRAM) workshop at ACL-97},
year={1997},
number={CRC-068 at http://www.cam.sri.com},
archivePrefix={arXiv},
eprint={cmp-lg/9705008},
primaryClass={cmp-lg cs.CL}
} | carter1997the |
arxiv-668983 | cmp-lg/9705009 | Charts, Interaction-Free Grammars, and the Compact Representation of Ambiguity | <|reference_start|>Charts, Interaction-Free Grammars, and the Compact Representation of Ambiguity: Recently researchers working in the LFG framework have proposed algorithms for taking advantage of the implicit context-free components of a unification grammar [Maxwell 96]. This paper clarifies the mathematical foundations of these techniques, provides a uniform framework in which they can be formally studied and eliminates the need for special purpose runtime data-structures recording ambiguity. The paper posits the identity: Ambiguous Feature Structures = Grammars, which states that (finitely) ambiguous representations are best seen as unification grammars of a certain type, here called ``interaction-free'' grammars, which generate in a backtrack-free way each of the feature structures subsumed by the ambiguous representation. This work extends a line of research [Billot and Lang 89, Lang 94] which stresses the connection between charts and grammars: a chart can be seen as a specialization of the reference grammar for a given input string. We show how this specialization grammar can be transformed into an interaction-free form which has the same practicality as a listing of the individual solutions, but is produced in less time and space.<|reference_end|> | arxiv | @article{dymetman1997charts,
title={Charts, Interaction-Free Grammars, and the Compact Representation of
Ambiguity},
author={Marc Dymetman (Rank Xerox Research Centre, Grenoble)},
journal={arXiv preprint arXiv:cmp-lg/9705009},
year={1997},
number={MLTT-029},
archivePrefix={arXiv},
eprint={cmp-lg/9705009},
primaryClass={cmp-lg cs.CL}
} | dymetman1997charts
arxiv-668984 | cmp-lg/9705010 | Memory-Based Learning: Using Similarity for Smoothing | <|reference_start|>Memory-Based Learning: Using Similarity for Smoothing: This paper analyses the relation between the use of similarity in Memory-Based Learning and the notion of backed-off smoothing in statistical language modeling. We show that the two approaches are closely related, and we argue that feature weighting methods in the Memory-Based paradigm can offer the advantage of automatically specifying a suitable domain-specific hierarchy between most specific and most general conditioning information without the need for a large number of parameters. We report two applications of this approach: PP-attachment and POS-tagging. Our method achieves state-of-the-art performance in both domains, and allows the easy integration of diverse information sources, such as rich lexical representations.<|reference_end|> | arxiv | @article{zavrel1997memory-based,
title={Memory-Based Learning: Using Similarity for Smoothing},
author={Jakub Zavrel and Walter Daelemans},
journal={arXiv preprint arXiv:cmp-lg/9705010},
year={1997},
number={ILK-9702},
archivePrefix={arXiv},
eprint={cmp-lg/9705010},
primaryClass={cmp-lg cs.CL}
} | zavrel1997memory-based |
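The abstract above relates memory-based (k-nearest-neighbour) learning to back-off smoothing through feature weighting. The sketch below illustrates that idea with an information-gain-weighted overlap metric on PP-attachment-style instances; the toy data, feature set and value of k are assumptions, not the paper's experimental setup.

```python
# A minimal sketch of memory-based classification with information-gain
# feature weighting (in the spirit of IB1-IG). All data below are toy
# assumptions for illustration only.
import math
from collections import Counter, defaultdict

def entropy(labels):
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

def information_gain(instances, labels, i):
    """H(class) minus the expected class entropy after splitting on feature i."""
    by_value = defaultdict(list)
    for x, y in zip(instances, labels):
        by_value[x[i]].append(y)
    remainder = sum(len(ys) / len(labels) * entropy(ys) for ys in by_value.values())
    return entropy(labels) - remainder

def weighted_overlap(a, b, weights):
    # Each mismatching feature contributes its weight to the distance.
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def knn_classify(train_x, train_y, query, weights, k=3):
    ranked = sorted(zip(train_x, train_y),
                    key=lambda ex: weighted_overlap(ex[0], query, weights))
    return Counter(label for _, label in ranked[:k]).most_common(1)[0][0]

if __name__ == "__main__":
    # Hypothetical instances: (verb, object noun, preposition, PP noun) -> attachment.
    X = [("eat", "pizza", "with", "fork"), ("eat", "pizza", "with", "anchovies"),
         ("see", "man", "with", "telescope"), ("buy", "book", "for", "child")]
    Y = ["verb", "noun", "noun", "verb"]
    W = [information_gain(X, Y, i) for i in range(4)]
    print(knn_classify(X, Y, ("eat", "salad", "with", "spoon"), W))
```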
arxiv-668985 | cmp-lg/9705011 | A Lexicon for Underspecified Semantic Tagging | <|reference_start|>A Lexicon for Underspecified Semantic Tagging: The paper defends the notion that semantic tagging should be viewed as more than disambiguation between senses. Instead, semantic tagging should be a first step in the interpretation process by assigning each lexical item a representation of all of its systematically related senses, from which further semantic processing steps can derive discourse dependent interpretations. This leads to a new type of semantic lexicon (CoreLex) that supports underspecified semantic tagging through a design based on systematic polysemous classes and a class-based acquisition of lexical knowledge for specific domains.<|reference_end|> | arxiv | @article{buitelaar1997a,
title={A Lexicon for Underspecified Semantic Tagging},
author={Paul Buitelaar (Dept. of Computer Science, Brandeis University)},
journal={arXiv preprint arXiv:cmp-lg/9705011},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705011},
primaryClass={cmp-lg cs.CL}
} | buitelaar1997a |
arxiv-668986 | cmp-lg/9705012 | A Comparative Study of the Application of Different Learning Techniques to Natural Language Interfaces | <|reference_start|>A Comparative Study of the Application of Different Learning Techniques to Natural Language Interfaces: In this paper we present first results from a comparative study. Its aim is to test the feasibility of different inductive learning techniques to perform the automatic acquisition of linguistic knowledge within a natural language database interface. In our interface architecture the machine learning module replaces an elaborate semantic analysis component. The learning module learns the correct mapping of a user's input to the corresponding database command based on a collection of past input data. We use an existing interface to a production planning and control system as evaluation and compare the results achieved by different instance-based and model-based learning algorithms.<|reference_end|> | arxiv | @article{winiwarter1997a,
title={A Comparative Study of the Application of Different Learning Techniques
to Natural Language Interfaces},
author={Werner Winiwarter, Yahiko Kambayashi (Dept. of Information Science,
Kyoto University)},
journal={arXiv preprint arXiv:cmp-lg/9705012},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705012},
primaryClass={cmp-lg cs.CL}
} | winiwarter1997a |
arxiv-668987 | cmp-lg/9705013 | FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural-Language Text | <|reference_start|>FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural-Language Text: FASTUS is a system for extracting information from natural language text for entry into a database and for other applications. It works essentially as a cascaded, nondeterministic finite-state automaton. There are five stages in the operation of FASTUS. In Stage 1, names and other fixed form expressions are recognized. In Stage 2, basic noun groups, verb groups, and prepositions and some other particles are recognized. In Stage 3, certain complex noun groups and verb groups are constructed. Patterns for events of interest are identified in Stage 4 and corresponding ``event structures'' are built. In Stage 5, distinct event structures that describe the same event are identified and merged, and these are used in generating database entries. This decomposition of language processing enables the system to do exactly the right amount of domain-independent syntax, so that domain-dependent semantic and pragmatic processing can be applied to the right larger-scale structures. FASTUS is very efficient and effective, and has been used successfully in a number of applications.<|reference_end|> | arxiv | @article{hobbs1997fastus:,
title={FASTUS: A Cascaded Finite-State Transducer for Extracting Information
from Natural-Language Text},
author={Jerry R. Hobbs, Douglas Appelt, John Bear, David Israel, Megumi
Kameyama, Mark Stickel, and Mabry Tyson (Artificial Intelligence Center, SRI
International, Menlo Park, California)},
journal={In Roche E. and Y. Schabes, eds., Finite-State Language
Processing, The MIT Press, Cambridge, MA, 1997, pages 383-406.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705013},
primaryClass={cmp-lg cs.CL}
} | hobbs1997fastus: |
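To make the cascade architecture described in the abstract above concrete, here is a toy sketch: each stage rewrites the previous stage's output with simple regular patterns, and a later stage maps a matched domain pattern to an event structure. The regular expressions, collapsed stage numbering and example sentence are illustrative assumptions, not the system's actual grammars.

```python
# A sketch of the cascading idea only; not the actual FASTUS patterns.
import re

def stage1_names(text):
    # Recognise fixed-form expressions; here, capitalised word sequences.
    return re.sub(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)*)\b", r"[NAME \1]", text)

def stage2_noun_groups(text):
    # Promote recognised names and determiner+noun sequences to noun groups.
    text = re.sub(r"\[NAME ([^]]+)\]", r"[NG \1]", text)
    return re.sub(r"\b((?:a|an|the) [a-z]+)\b", r"[NG \1]", text)

def stage4_events(text):
    # Domain pattern of interest: "<NG> acquired <NG>" -> event structure.
    m = re.search(r"\[NG ([^]]+)\] acquired \[NG ([^]]+)\]", text)
    if m:
        return {"event": "ACQUISITION", "agent": m.group(1), "object": m.group(2)}
    return None

if __name__ == "__main__":
    sentence = "Bridgestone Sports acquired a maker of golf clubs"
    grouped = stage2_noun_groups(stage1_names(sentence))
    print(grouped)                 # [NG Bridgestone Sports] acquired [NG a maker] of golf clubs
    print(stage4_events(grouped))  # {'event': 'ACQUISITION', 'agent': 'Bridgestone Sports', 'object': 'a maker'}
```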
arxiv-668988 | cmp-lg/9705014 | Incorporating POS Tagging into Language Modeling | <|reference_start|>Incorporating POS Tagging into Language Modeling: Language models for speech recognition tend to concentrate solely on recognizing the words that were spoken. In this paper, we redefine the speech recognition problem so that its goal is to find both the best sequence of words and their syntactic role (part-of-speech) in the utterance. This is a necessary first step towards tightening the interaction between speech recognition and natural language understanding.<|reference_end|> | arxiv | @article{heeman1997incorporating,
title={Incorporating POS Tagging into Language Modeling},
author={Peter A. Heeman (CNET, France Telecom), James F. Allen (University of
Rochester)},
journal={In proceedings of Eurospeech'97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705014},
primaryClass={cmp-lg cs.CL}
} | heeman1997incorporating |
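The abstract above redefines recognition as a search for the best joint word and tag sequence. As a rough illustration of one common factorisation behind such models, the sketch below scores tag sequences for a fixed word string with P(w_i|t_i)·P(t_i|t_{i-1}) and a small Viterbi search; the tag set and probabilities are assumptions, and the paper's model is richer and operates inside the recogniser's search over word hypotheses rather than on a fixed transcript.

```python
# Minimal sketch: P(W, T) = prod_i P(w_i | t_i) * P(t_i | t_{i-1}),
# with toy (assumed) parameters and a Viterbi search over tags.
import math

TAGS = ["DET", "NOUN", "VERB"]
TRANS = {("<s>", "DET"): 0.6, ("<s>", "NOUN"): 0.4,
         ("DET", "NOUN"): 0.9, ("DET", "VERB"): 0.1,
         ("NOUN", "VERB"): 0.7, ("NOUN", "NOUN"): 0.3,
         ("VERB", "NOUN"): 0.5, ("VERB", "DET"): 0.5}
EMIT = {("the", "DET"): 0.9, ("time", "NOUN"): 0.6, ("time", "VERB"): 0.1,
        ("flies", "NOUN"): 0.4, ("flies", "VERB"): 0.2}

def viterbi(words, floor=1e-6):
    """Best log-probability tag sequence for a fixed word sequence."""
    best = {"<s>": (0.0, [])}                     # tag -> (log-prob, path)
    for w in words:
        new_best = {}
        for t in TAGS:
            emit = math.log(EMIT.get((w, t), floor))
            score, path = max(
                ((s + math.log(TRANS.get((prev, t), floor)) + emit, p + [t])
                 for prev, (s, p) in best.items()),
                key=lambda cand: cand[0])
            new_best[t] = (score, path)
        best = new_best
    return max(best.values(), key=lambda cand: cand[0])

if __name__ == "__main__":
    log_prob, tags = viterbi(["time", "flies"])
    print(tags, log_prob)   # expected: ['NOUN', 'VERB']
```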
arxiv-668989 | cmp-lg/9705015 | Translation Methodology in the Spoken Language Translator: An Evaluation | <|reference_start|>Translation Methodology in the Spoken Language Translator: An Evaluation: In this paper we describe how the translation methodology adopted for the Spoken Language Translator (SLT) addresses the characteristics of the speech translation task in a context where it is essential to achieve easy customization to new languages and new domains. We then discuss the issues that arise in any attempt to evaluate a speech translator, and present the results of such an evaluation carried out on SLT for several language pairs.<|reference_end|> | arxiv | @article{carter1997translation,
title={Translation Methodology in the Spoken Language Translator: An Evaluation},
author={David Carter, Ralph Becket and Manny Rayner (SRI International,
Cambridge); Robert Eklund, Catriona MacDermid and Mats Wiren (Telia
Research); Sabine Kirchmeier-Andersen and Christina Philp (Copenhagen
Business School)},
journal={arXiv preprint arXiv:cmp-lg/9705015},
year={1997},
number={CRC-070 at http://www.cam.sri.com},
archivePrefix={arXiv},
eprint={cmp-lg/9705015},
primaryClass={cmp-lg cs.CL}
} | carter1997translation |
arxiv-668990 | cmp-lg/9705016 | Sense Tagging: Semantic Tagging with a Lexicon | <|reference_start|>Sense Tagging: Semantic Tagging with a Lexicon: Sense tagging, the automatic assignment of the appropriate sense from some lexicon to each of the words in a text, is a specialised instance of the general problem of semantic tagging by category or type. We discuss which recent word sense disambiguation algorithms are appropriate for sense tagging. It is our belief that sense tagging can be carried out effectively by combining several simple, independent, methods and we include the design of such a tagger. A prototype of this system has been implemented, correctly tagging 86% of polysemous word tokens in a small test set, providing evidence that our hypothesis is correct.<|reference_end|> | arxiv | @article{wilks1997sense,
title={Sense Tagging: Semantic Tagging with a Lexicon},
author={Yorick Wilks and Mark Stevenson},
journal={arXiv preprint arXiv:cmp-lg/9705016},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9705016},
primaryClass={cmp-lg cs.CL}
} | wilks1997sense |
arxiv-668991 | cmp-lg/9706001 | Assigning Grammatical Relations with a Back-off Model | <|reference_start|>Assigning Grammatical Relations with a Back-off Model: This paper presents a corpus-based method to assign grammatical subject/object relations to ambiguous German constructs. It makes use of an unsupervised learning procedure to collect training and test data, and the back-off model to make assignment decisions.<|reference_end|> | arxiv | @article{delima1997assigning,
title={Assigning Grammatical Relations with a Back-off Model},
author={Erika F. de Lima},
journal={arXiv preprint arXiv:cmp-lg/9706001},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706001},
primaryClass={cmp-lg cs.CL}
} | delima1997assigning
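As a rough illustration of the back-off idea in the abstract above: use the most specific relative-frequency estimate whose counts are attested, and fall back to coarser statistics otherwise. The conditioning features, counts and the uniform prior below are assumptions, not the paper's actual model.

```python
# Sketch of a back-off decision between subject and object readings.
from collections import Counter

# Toy counts (assumed): (verb, noun, relation) frequencies such as an
# unsupervised collection procedure might yield from unambiguous clauses.
triples = Counter({("sehen", "Mann", "subj"): 2, ("sehen", "Mann", "obj"): 7,
                   ("sehen", "Hund", "subj"): 1})

def marginal(keep):
    """Sum relation counts over all triples selected by the predicate."""
    out = Counter()
    for (v, n, rel), c in triples.items():
        if keep(v, n):
            out[rel] += c
    return out

def distribution(rel_counts):
    total = sum(rel_counts.values())
    return {r: c / total for r, c in rel_counts.items()} if total else None

def assign_relation(verb, noun):
    # Most specific estimate first, then back off to coarser ones.
    levels = [distribution(marginal(lambda v, n: v == verb and n == noun)),  # P(rel | verb, noun)
              distribution(marginal(lambda v, n: v == verb)),                # P(rel | verb)
              {"subj": 0.5, "obj": 0.5}]                                     # prior
    for dist in levels:
        if dist:
            return max(dist, key=dist.get), dist

if __name__ == "__main__":
    print(assign_relation("sehen", "Mann"))   # specific counts available
    print(assign_relation("sehen", "Katze"))  # backs off to the verb level
    print(assign_relation("kaufen", "Buch"))  # falls through to the prior
```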
arxiv-668992 | cmp-lg/9706002 | Learning Parse and Translation Decisions From Examples With Rich Context | <|reference_start|>Learning Parse and Translation Decisions From Examples With Rich Context: We present a knowledge and context-based system for parsing and translating natural language and evaluate it on sentences from the Wall Street Journal. Applying machine learning techniques, the system uses parse action examples acquired under supervision to generate a deterministic shift-reduce parser in the form of a decision structure. It relies heavily on context, as encoded in features which describe the morphological, syntactic, semantic and other aspects of a given parse state.<|reference_end|> | arxiv | @article{hermjakob1997learning,
title={Learning Parse and Translation Decisions From Examples With Rich Context},
author={Ulf Hermjakob, Raymond J. Mooney (Dept. of Computer Sciences,
University of Texas at Austin)},
journal={Proceedings of ACL/EACL'97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706002},
primaryClass={cmp-lg cs.CL}
} | hermjakob1997learning |
arxiv-668993 | cmp-lg/9706003 | Three New Probabilistic Models for Dependency Parsing: An Exploration | <|reference_start|>Three New Probabilistic Models for Dependency Parsing: An Exploration: After presenting a novel O(n^3) parsing algorithm for dependency grammar, we develop three contrasting ways to stochasticize it. We propose (a) a lexical affinity model where words struggle to modify each other, (b) a sense tagging model where words fluctuate randomly in their selectional preferences, and (c) a generative model where the speaker fleshes out each word's syntactic and conceptual structure without regard to the implications for the hearer. We also give preliminary empirical results from evaluating the three models' parsing performance on annotated Wall Street Journal training text (derived from the Penn Treebank). In these results, the generative (i.e., top-down) model performs significantly better than the others, and does about equally well at assigning part-of-speech tags.<|reference_end|> | arxiv | @article{eisner1997three,
title={Three New Probabilistic Models for Dependency Parsing: An Exploration},
author={Jason Eisner (Univ. of Pennsylvania)},
journal={Proceedings of the 16th International Conference on Computational
Linguistics (COLING-96), Copenhagen, August 1996, pp. 340-345},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706003},
primaryClass={cmp-lg cs.CL}
} | eisner1997three |
arxiv-668994 | cmp-lg/9706004 | An Empirical Comparison of Probability Models for Dependency Grammar | <|reference_start|>An Empirical Comparison of Probability Models for Dependency Grammar: This technical report is an appendix to Eisner (1996): it gives superior experimental results that were reported only in the talk version of that paper. Eisner (1996) trained three probability models on a small set of about 4,000 conjunction-free, dependency-grammar parses derived from the Wall Street Journal section of the Penn Treebank, and then evaluated the models on a held-out test set, using a novel O(n^3) parsing algorithm. The present paper describes some details of the experiments and repeats them with a larger training set of 25,000 sentences. As reported at the talk, the more extensive training yields greatly improved performance. Nearly half the sentences are parsed with no misattachments; two-thirds are parsed with at most one misattachment. Of the models described in the original written paper, the best score is still obtained with the generative (top-down) "model C." However, slightly better models are also explored, in particular, two variants on the comprehension (bottom-up) "model B." The better of these has an attachment accuracy of 90%, and (unlike model C) tags words more accurately than the comparable trigram tagger. Differences are statistically significant. If tags are roughly known in advance, search error is all but eliminated and the new model attains an attachment accuracy of 93%. We find that the parser of Collins (1996), when combined with a highly-trained tagger, also achieves 93% when trained and tested on the same sentences. Similarities and differences are discussed.<|reference_end|> | arxiv | @article{eisner1997an,
title={An Empirical Comparison of Probability Models for Dependency Grammar},
author={Jason Eisner (Univ. of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9706004},
year={1997},
number={Technical report IRCS-96-11, Institute for Research in Cognitive
Science, U. of Pennsylvania},
archivePrefix={arXiv},
eprint={cmp-lg/9706004},
primaryClass={cmp-lg cs.CL}
} | eisner1997an |
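For readers unfamiliar with the evaluation measure quoted in the abstract above, attachment accuracy is simply the fraction of words whose predicted head matches the gold-standard head. A minimal sketch, with assumed toy trees:

```python
# Attachment accuracy: proportion of words attached to the correct parent.
def attachment_accuracy(gold_heads, predicted_heads):
    """Each argument is a list of sentences; a sentence is a list of 1-based
    head indices aligned with its words, with 0 marking the root."""
    correct = total = 0
    for gold, pred in zip(gold_heads, predicted_heads):
        correct += sum(g == p for g, p in zip(gold, pred))
        total += len(gold)
    return correct / total

if __name__ == "__main__":
    # "the dog barks" -> heads [2, 3, 0]; "John sleeps" -> heads [2, 0]
    gold = [[2, 3, 0], [2, 0]]
    pred = [[3, 3, 0], [2, 0]]      # one misattachment out of five words
    print(attachment_accuracy(gold, pred))   # 0.8
```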
arxiv-668995 | cmp-lg/9706005 | Comparing a Linguistic and a Stochastic Tagger | <|reference_start|>Comparing a Linguistic and a Stochastic Tagger: Concerning different approaches to automatic PoS tagging: EngCG-2, a constraint-based morphological tagger, is compared in a double-blind test with a state-of-the-art statistical tagger on a common disambiguation task using a common tag set. The experiments show that for the same amount of remaining ambiguity, the error rate of the statistical tagger is one order of magnitude greater than that of the rule-based one. The two related issues of priming effects compromising the results and disagreement between human annotators are also addressed.<|reference_end|> | arxiv | @article{samuelsson1997comparing,
title={Comparing a Linguistic and a Stochastic Tagger},
author={Christer Samuelsson (Lucent Technologies), Atro Voutilainen
(University of Helsinki)},
journal={arXiv preprint arXiv:cmp-lg/9706005},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706005},
primaryClass={cmp-lg cs.CL}
} | samuelsson1997comparing |
arxiv-668996 | cmp-lg/9706006 | Mistake-Driven Learning in Text Categorization | <|reference_start|>Mistake-Driven Learning in Text Categorization: Learning problems in the text processing domain often map the text to a space whose dimensions are the measured features of the text, e.g., its words. Three characteristic properties of this domain are (a) very high dimensionality, (b) both the learned concepts and the instances reside very sparsely in the feature space, and (c) a high variation in the number of active features in an instance. In this work we study three mistake-driven learning algorithms for a typical task of this nature -- text categorization. We argue that these algorithms -- which categorize documents by learning a linear separator in the feature space -- have a few properties that make them ideal for this domain. We then show that a quantum leap in performance is achieved when we further modify the algorithms to better address some of the specific characteristics of the domain. In particular, we demonstrate (1) how variation in document length can be tolerated by either normalizing feature weights or by using negative weights, (2) the positive effect of applying a threshold range in training, (3) alternatives in considering feature frequency, and (4) the benefits of discarding features while training. Overall, we present an algorithm, a variation of Littlestone's Winnow, which performs significantly better than any other algorithm tested on this task using a similar feature set.<|reference_end|> | arxiv | @article{dagan1997mistake-driven,
title={Mistake-Driven Learning in Text Categorization},
author={Ido Dagan (Bar Ilan University, Israel), Yael Karov (Weizmann
Institute, Israel), Dan Roth (Weizmann Institute, Israel)},
journal={arXiv preprint arXiv:cmp-lg/9706006},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706006},
primaryClass={cmp-lg cs.CL}
} | dagan1997mistake-driven |
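The abstract above centres on Winnow-style mistake-driven learning of a linear separator over sparse word features. Below is a minimal sketch of the positive Winnow update (promote weights of active features on false negatives, demote them on false positives); the toy documents, threshold and learning rates are assumptions, and the paper's variant adds refinements such as a threshold range and length normalisation. Multiplicative updates are attractive here because the mistake bound degrades only mildly with the very high dimensionality the abstract emphasises.

```python
# Minimal sketch of positive Winnow on sparse bag-of-words features.
from collections import defaultdict

def train_winnow(docs, labels, epochs=10, theta=1.0, alpha=1.5, beta=0.5):
    weights = defaultdict(lambda: 1.0)           # one positive weight per feature
    for _ in range(epochs):
        for words, label in zip(docs, labels):
            score = sum(weights[w] for w in set(words))
            predicted = score >= theta
            if predicted and not label:          # false positive: demote
                for w in set(words):
                    weights[w] *= beta
            elif not predicted and label:        # false negative: promote
                for w in set(words):
                    weights[w] *= alpha
    return weights

def classify(weights, words, theta=1.0):
    # Unseen features keep the initial weight of 1.0.
    return sum(weights.get(w, 1.0) for w in set(words)) >= theta

if __name__ == "__main__":
    docs = [["stocks", "fell", "market"], ["goal", "match", "season"],
            ["market", "shares", "rose"], ["coach", "match", "injury"]]
    labels = [True, False, True, False]          # True = "finance" category
    w = train_winnow(docs, labels)
    print(classify(w, ["shares", "market", "slipped"]))   # expected: True
    print(classify(w, ["season", "coach", "goal"]))       # expected: False
```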
arxiv-668997 | cmp-lg/9706007 | Aggregate and mixed-order Markov models for statistical language processing | <|reference_start|>Aggregate and mixed-order Markov models for statistical language processing: We consider the use of language models whose size and accuracy are intermediate between different order n-gram models. Two types of models are studied in particular. Aggregate Markov models are class-based bigram models in which the mapping from words to classes is probabilistic. Mixed-order Markov models combine bigram models whose predictions are conditioned on different words. Both types of models are trained by Expectation-Maximization (EM) algorithms for maximum likelihood estimation. We examine smoothing procedures in which these models are interposed between different order n-grams. This is found to significantly reduce the perplexity of unseen word combinations.<|reference_end|> | arxiv | @article{saul1997aggregate,
title={Aggregate and mixed-order Markov models for statistical language
processing},
author={Lawrence Saul and Fernando Pereira (AT&T Labs -- Research)},
journal={arXiv preprint arXiv:cmp-lg/9706007},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706007},
primaryClass={cmp-lg cs.CL}
} | saul1997aggregate |
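A rough sketch of the aggregate Markov model described above, P(w2|w1) = sum_c P(c|w1) P(w2|c), trained by EM from bigram counts. The vocabulary, number of classes and counts below are toy assumptions; the mixed-order models and the interposition between n-gram orders are not shown. Varying the number of classes is what lets the model sit between a unigram and a full bigram model in size and accuracy.

```python
# Sketch of EM training for an aggregate (class-based) bigram model.
import random

def normalise(xs):
    s = sum(xs)
    return [x / s for x in xs] if s else [1.0 / len(xs)] * len(xs)

def train_aggregate_bigram(bigram_counts, num_classes=2, iterations=50, seed=0):
    rng = random.Random(seed)
    words = sorted({w for pair in bigram_counts for w in pair})
    word_index = {w: i for i, w in enumerate(words)}
    # Randomly initialised parameters P(c | w1) and P(w2 | c).
    p_class = {w: normalise([rng.random() for _ in range(num_classes)]) for w in words}
    p_word = [normalise([rng.random() for _ in words]) for _ in range(num_classes)]

    for _ in range(iterations):
        class_acc = {w: [0.0] * num_classes for w in words}
        word_acc = [[0.0] * len(words) for _ in range(num_classes)]
        for (w1, w2), n in bigram_counts.items():
            # E-step: posterior over the hidden class for this bigram.
            joint = [p_class[w1][c] * p_word[c][word_index[w2]] for c in range(num_classes)]
            post = normalise(joint)
            for c in range(num_classes):
                class_acc[w1][c] += n * post[c]
                word_acc[c][word_index[w2]] += n * post[c]
        # M-step: re-estimate parameters from expected counts.
        p_class = {w: normalise(class_acc[w]) for w in words}
        p_word = [normalise(row) for row in word_acc]
    return p_class, p_word, word_index

def prob(p_class, p_word, word_index, w1, w2):
    return sum(pc * p_word[c][word_index[w2]] for c, pc in enumerate(p_class[w1]))

if __name__ == "__main__":
    counts = {("the", "dog"): 8, ("the", "cat"): 6, ("a", "dog"): 3,
              ("a", "cat"): 2, ("dog", "barks"): 5, ("cat", "purrs"): 4}
    pc, pw, idx = train_aggregate_bigram(counts, num_classes=2)
    print(prob(pc, pw, idx, "the", "dog"))
    print(prob(pc, pw, idx, "a", "cat"))
```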
arxiv-668998 | cmp-lg/9706008 | Distinguishing Word Senses in Untagged Text | <|reference_start|>Distinguishing Word Senses in Untagged Text: This paper describes an experimental comparison of three unsupervised learning algorithms that distinguish the sense of an ambiguous word in untagged text. The methods described in this paper, McQuitty's similarity analysis, Ward's minimum-variance method, and the EM algorithm, assign each instance of an ambiguous word to a known sense definition based solely on the values of automatically identifiable features in text. These methods and feature sets are found to be more successful in disambiguating nouns rather than adjectives or verbs. Overall, the most accurate of these procedures is McQuitty's similarity analysis in combination with a high dimensional feature set.<|reference_end|> | arxiv | @article{pedersen1997distinguishing,
title={Distinguishing Word Senses in Untagged Text},
author={Ted Pedersen (Southern Methodist University) and Rebecca Bruce
(Southern Methodist University)},
journal={Appears in the Proceedings of the Second Conference on Empirical
Methods in NLP (EMNLP-2), August 1-2, 1997, Providence, RI},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706008},
primaryClass={cmp-lg cs.CL}
} | pedersen1997distinguishing |
arxiv-668999 | cmp-lg/9706009 | Library of Practical Abstractions, Release 1.2 | <|reference_start|>Library of Practical Abstractions, Release 1.2: The library of practical abstractions (LIBPA) provides efficient implementations of conceptually simple abstractions, in the C programming language. We believe that the best library code is conceptually simple so that it will be easily understood by the application programmer; parameterized by type so that it enjoys wide applicability; and at least as efficient as a straightforward special-purpose implementation. You will find that our software satisfies the highest standards of software design, implementation, testing, and benchmarking. The current LIBPA release is a source code distribution only. It consists of modules for portable memory management, one dimensional arrays of arbitrary types, compact symbol tables, hash tables for arbitrary types, a trie module for length-delimited strings over arbitrary alphabets, single precision floating point numbers with extended exponents, and logarithmic representations of probability values using either fixed or floating point numbers. We have used LIBPA to implement a wide range of statistical models for both continuous and discrete domains. The time and space efficiency of LIBPA has allowed us to build larger statistical models than previously reported, and to investigate more computationally-intensive techniques than previously possible. We have found LIBPA to be indispensible in our own research, and hope that you will find it useful in yours.<|reference_end|> | arxiv | @article{ristad1997library,
title={Library of Practical Abstractions, Release 1.2},
author={Eric Sven Ristad and Peter N. Yianilos},
journal={arXiv preprint arXiv:cmp-lg/9706009},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706009},
primaryClass={cmp-lg cs.CL}
} | ristad1997library |
arxiv-669000 | cmp-lg/9706010 | Exemplar-Based Word Sense Disambiguation: Some Recent Improvements | <|reference_start|>Exemplar-Based Word Sense Disambiguation: Some Recent Improvements: In this paper, we report recent improvements to the exemplar-based learning approach for word sense disambiguation that have achieved higher disambiguation accuracy. By using a larger value of $k$, the number of nearest neighbors to use for determining the class of a test example, and through 10-fold cross validation to automatically determine the best $k$, we have obtained improved disambiguation accuracy on a large sense-tagged corpus first used in \cite{ng96}. The accuracy achieved by our improved exemplar-based classifier is comparable to the accuracy on the same data set obtained by the Naive-Bayes algorithm, which was reported in \cite{mooney96} to have the highest disambiguation accuracy among seven state-of-the-art machine learning algorithms.<|reference_end|> | arxiv | @article{ng1997exemplar-based,
title={Exemplar-Based Word Sense Disambiguation: Some Recent Improvements},
author={Hwee Tou Ng (DSO National Laboratories)},
journal={In Proceedings of the Second Conference on Empirical Methods in
Natural Language Processing (EMNLP-2), August 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706010},
primaryClass={cmp-lg cs.CL}
} | ng1997exemplar-based |
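The improvement reported in the abstract above comes largely from choosing the neighbourhood size k by cross-validation. The sketch below does this for a simple overlap-distance exemplar classifier; the context features, sense labels and fold count are toy assumptions (the paper uses 10-fold cross-validation on a large sense-tagged corpus).

```python
# Sketch of picking k by cross-validation for an exemplar-based
# (k-NN) word-sense classifier over discrete context features.
from collections import Counter

def distance(a, b):
    return sum(1 for x, y in zip(a, b) if x != y)   # feature mismatch count

def knn_predict(train, query, k):
    ranked = sorted(train, key=lambda ex: distance(ex[0], query))
    return Counter(sense for _, sense in ranked[:k]).most_common(1)[0][0]

def cross_validate(data, k, folds=3):
    correct = 0
    for i in range(folds):
        test = data[i::folds]
        train = [ex for j, ex in enumerate(data) if j % folds != i]
        correct += sum(knn_predict(train, feats, k) == sense for feats, sense in test)
    return correct / len(data)

def best_k(data, candidates=(1, 3, 5, 7)):
    return max(candidates, key=lambda k: cross_validate(data, k))

if __name__ == "__main__":
    # Each exemplar: (assumed surrounding-word features for "interest", sense label).
    data = [(("rate", "bank", "pay"), "money"), (("rate", "loan", "bank"), "money"),
            (("hobby", "keen", "show"), "attention"), (("topic", "keen", "great"), "attention"),
            (("bank", "annual", "pay"), "money"), (("great", "topic", "show"), "attention")]
    k = best_k(data)
    print("chosen k:", k)
    print(knn_predict(data, ("bank", "loan", "annual"), k))
```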