corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-669001 | cmp-lg/9706011 | Applying Reliability Metrics to Co-Reference Annotation | <|reference_start|>Applying Reliability Metrics to Co-Reference Annotation: Studies of the contextual and linguistic factors that constrain discourse phenomena such as reference are coming to depend increasingly on annotated language corpora. In preparing the corpora, it is important to evaluate the reliability of the annotation, but methods for doing so have not been readily available. In this report, I present a method for computing reliability of coreference annotation. First I review a method for applying the information retrieval metrics of recall and precision to coreference annotation proposed by Marc Vilain and his collaborators. I show how this method makes it possible to construct contingency tables for computing Cohen's Kappa, a familiar reliability metric. By comparing recall and precision to reliability on the same data sets, I also show that recall and precision can be misleadingly high. Because Kappa factors out chance agreement among coders, it is a preferable measure for developing annotated corpora where no pre-existing target annotation exists.<|reference_end|> | arxiv | @article{passonneau1997applying,
title={Applying Reliability Metrics to Co-Reference Annotation},
author={Rebecca J. Passonneau},
journal={arXiv preprint arXiv:cmp-lg/9706011},
year={1997},
number={CUCS-017-97},
archivePrefix={arXiv},
eprint={cmp-lg/9706011},
primaryClass={cmp-lg cs.CL}
} | passonneau1997applying |
arxiv-669002 | cmp-lg/9706012 | Probabilistic Coreference in Information Extraction | <|reference_start|>Probabilistic Coreference in Information Extraction: Certain applications require that the output of an information extraction system be probabilistic, so that a downstream system can reliably fuse the output with possibly contradictory information from other sources. In this paper we consider the problem of assigning a probability distribution to alternative sets of coreference relationships among entity descriptions. We present the results of initial experiments with several approaches to estimating such distributions in an application using SRI's FASTUS information extraction system.<|reference_end|> | arxiv | @article{kehler1997probabilistic,
title={Probabilistic Coreference in Information Extraction},
author={Andrew Kehler (SRI International)},
journal={Proceedings of the Second Conference on Empirical Methods in NLP
(EMNLP-2), August 1-2, 1997, Providence, RI},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706012},
primaryClass={cmp-lg cs.CL}
} | kehler1997probabilistic |
arxiv-669003 | cmp-lg/9706013 | A Corpus-Based Approach for Building Semantic Lexicons | <|reference_start|>A Corpus-Based Approach for Building Semantic Lexicons: Semantic knowledge can be a great asset to natural language processing systems, but it is usually hand-coded for each application. Although some semantic information is available in general-purpose knowledge bases such as WordNet and Cyc, many applications require domain-specific lexicons that represent words and categories for a particular topic. In this paper, we present a corpus-based method that can be used to build semantic lexicons for specific categories. The input to the system is a small set of seed words for a category and a representative text corpus. The output is a ranked list of words that are associated with the category. A user then reviews the top-ranked words and decides which ones should be entered in the semantic lexicon. In experiments with five categories, users typically found about 60 words per category in 10-15 minutes to build a core semantic lexicon.<|reference_end|> | arxiv | @article{riloff1997a,
title={A Corpus-Based Approach for Building Semantic Lexicons},
author={Ellen Riloff and Jessica Shepherd (University of Utah)},
journal={arXiv preprint arXiv:cmp-lg/9706013},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706013},
primaryClass={cmp-lg cs.CL}
} | riloff1997a |
arxiv-669004 | cmp-lg/9706014 | A Linear Observed Time Statistical Parser Based on Maximum Entropy Models | <|reference_start|>A Linear Observed Time Statistical Parser Based on Maximum Entropy Models: This paper presents a statistical parser for natural language that obtains a parsing accuracy---roughly 87% precision and 86% recall---which surpasses the best previously published results on the Wall St. Journal domain. The parser itself requires very little human intervention, since the information it uses to make parsing decisions is specified in a concise and simple manner, and is combined in a fully automatic way under the maximum entropy framework. The observed running time of the parser on a test sentence is linear with respect to the sentence length. Furthermore, the parser returns several scored parses for a sentence, and this paper shows that a scheme to pick the best parse from the 20 highest scoring parses could yield a dramatically higher accuracy of 93% precision and recall.<|reference_end|> | arxiv | @article{ratnaparkhi1997a,
title={A Linear Observed Time Statistical Parser Based on Maximum Entropy
Models},
author={Adwait Ratnaparkhi (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9706014},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706014},
primaryClass={cmp-lg cs.CL}
} | ratnaparkhi1997a |
arxiv-669005 | cmp-lg/9706015 | Determining Internal and External Indices for Chart Generation | <|reference_start|>Determining Internal and External Indices for Chart Generation: This paper presents a compilation procedure which determines internal and external indices for signs in a unification based grammar to be used in improving the computational efficiency of lexicalist chart generation. The procedure takes as input a grammar and a set of feature paths indicating the position of semantic indices in a sign, and calculates the fixed-point of a set of equations derived from the grammar. The result is a set of independent constraints stating which indices in a sign can be bound to other signs within a complete sentence. Based on these constraints, two tests are formulated which reduce the search space during generation.<|reference_end|> | arxiv | @article{trujillo1997determining,
title={Determining Internal and External Indices for Chart Generation},
author={Arturo Trujillo (CCL, UMIST)},
journal={arXiv preprint arXiv:cmp-lg/9706015},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706015},
primaryClass={cmp-lg cs.CL}
} | trujillo1997determining |
arxiv-669006 | cmp-lg/9706016 | Text Segmentation Using Exponential Models | <|reference_start|>Text Segmentation Using Exponential Models: This paper introduces a new statistical approach to partitioning text automatically into coherent segments. Our approach enlists both short-range and long-range language models to help it sniff out likely sites of topic changes in text. To aid its search, the system consults a set of simple lexical hints it has learned to associate with the presence of boundaries through inspection of a large corpus of annotated data. We also propose a new probabilistically motivated error metric for use by the natural language processing and information retrieval communities, intended to supersede precision and recall for appraising segmentation algorithms. Qualitative assessment of our algorithm as well as evaluation using this new metric demonstrate the effectiveness of our approach in two very different domains, Wall Street Journal articles and the TDT Corpus, a collection of newswire articles and broadcast news transcripts.<|reference_end|> | arxiv | @article{beeferman1997text,
title={Text Segmentation Using Exponential Models},
author={Doug Beeferman, Adam Berger and John Lafferty (Carnegie Mellon)},
journal={arXiv preprint arXiv:cmp-lg/9706016},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706016},
primaryClass={cmp-lg cs.CL}
} | beeferman1997text |
arxiv-669007 | cmp-lg/9706017 | Name Searching and Information Retrieval | <|reference_start|>Name Searching and Information Retrieval: The main application of name searching has been name matching in a database of names. This paper discusses a different application: improving information retrieval through name recognition. It investigates name recognition accuracy, and the effect on retrieval performance of indexing and searching personal names differently from non-name terms in the context of ranked retrieval. The main conclusions are: that name recognition in text can be effective; that names occur frequently enough in a variety of domains, including those of legal documents and news databases, to make recognition worthwhile; and that retrieval performance can be improved using name searching.<|reference_end|> | arxiv | @article{thompson1997name,
title={Name Searching and Information Retrieval},
author={Paul Thompson and Christopher C. Dozier (West Group)},
journal={arXiv preprint arXiv:cmp-lg/9706017},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706017},
primaryClass={cmp-lg cs.CL}
} | thompson1997name |
arxiv-669008 | cmp-lg/9706018 | A Model of Lexical Attraction and Repulsion | <|reference_start|>A Model of Lexical Attraction and Repulsion: This paper introduces new methods based on exponential families for modeling the correlations between words in text and speech. While previous work assumed the effects of word co-occurrence statistics to be constant over a window of several hundred words, we show that their influence is nonstationary on a much smaller time scale. Empirical data drawn from English and Japanese text, as well as conversational speech, reveals that the ``attraction'' between words decays exponentially, while stylistic and syntactic constraints create a ``repulsion'' between words that discourages close co-occurrence. We show that these characteristics are well described by simple mixture models based on two-stage exponential distributions which can be trained using the EM algorithm. The resulting distance distributions can then be incorporated as penalizing features in an exponential language model.<|reference_end|> | arxiv | @article{beeferman1997a,
title={A Model of Lexical Attraction and Repulsion},
author={Doug Beeferman, Adam Berger and John Lafferty (Carnegie Mellon)},
journal={arXiv preprint arXiv:cmp-lg/9706018},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706018},
primaryClass={cmp-lg cs.CL}
} | beeferman1997a |
arxiv-669009 | cmp-lg/9706019 | Evaluating Competing Agent Strategies for a Voice Email Agent | <|reference_start|>Evaluating Competing Agent Strategies for a Voice Email Agent: This paper reports experimental results comparing a mixed-initiative to a system-initiative dialog strategy in the context of a personal voice email agent. To independently test the effects of dialog strategy and user expertise, users interact with either the system-initiative or the mixed-initiative agent to perform three successive tasks which are identical for both agents. We report performance comparisons across agent strategies as well as over tasks. This evaluation utilizes and tests the PARADISE evaluation framework, and discusses the performance function derivable from the experimental data.<|reference_end|> | arxiv | @article{walker1997evaluating,
title={Evaluating Competing Agent Strategies for a Voice Email Agent},
author={Marilyn Walker, Donald Hindle, Jeanne Fromer, Giuseppe Di Fabbrizio,
Craig Mestel (ATT Labs Research)},
journal={Proceedings of the European Conference on Speech Communication and
Technology, EUROSPEECH97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706019},
primaryClass={cmp-lg cs.CL}
} | walker1997evaluating |
arxiv-669010 | cmp-lg/9706020 | An Empirical Approach to Temporal Reference Resolution | <|reference_start|>An Empirical Approach to Temporal Reference Resolution: This paper presents the results of an empirical investigation of temporal reference resolution in scheduling dialogs. The algorithm adopted is primarily a linear-recency based approach that does not include a model of global focus. A fully automatic system has been developed and evaluated on unseen test data with good results. This paper presents the results of an intercoder reliability study, a model of temporal reference resolution that supports linear recency and has very good coverage, the results of the system evaluated on unseen test data, and a detailed analysis of the dialogs assessing the viability of the approach.<|reference_end|> | arxiv | @article{wiebe1997an,
title={An Empirical Approach to Temporal Reference Resolution},
author={Janyce Wiebe, Tom O'Hara, Kenneth McKeever, and Thorsten
Oehrstroem-Sandgren (New Mexico State University)},
journal={Proceedings of the Second Conference On Empirical Methods in
Natural Language Processing (EMNLP-2), August 1-2, 1997, Providence, RI},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706020},
primaryClass={cmp-lg cs.CL}
} | wiebe1997an |
arxiv-669011 | cmp-lg/9706021 | An Efficient Distribution of Labor in a Two Stage Robust Interpretation Process | <|reference_start|>An Efficient Distribution of Labor in a Two Stage Robust Interpretation Process: Although Minimum Distance Parsing (MDP) offers a theoretically attractive solution to the problem of extragrammaticality, it is often computationally infeasible in large scale practical applications. In this paper we present an alternative approach where the labor is distributed between a more restrictive partial parser and a repair module. Though two stage approaches have grown in popularity in recent years because of their efficiency, they have done so at the cost of requiring hand coded repair heuristics. In contrast, our two stage approach does not require any hand coded knowledge sources dedicated to repair, thus making it possible to achieve a similar run time advantage over MDP without losing the quality of domain independence.<|reference_end|> | arxiv | @article{rose'1997an,
title={An Efficient Distribution of Labor in a Two Stage Robust Interpretation
Process},
author={Carolyn Penstein Rose' and Alon Lavie},
journal={arXiv preprint arXiv:cmp-lg/9706021},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706021},
primaryClass={cmp-lg cs.CL}
} | rose'1997an |
arxiv-669012 | cmp-lg/9706022 | Three Generative, Lexicalised Models for Statistical Parsing | <|reference_start|>Three Generative, Lexicalised Models for Statistical Parsing: In this paper we first propose a new statistical parsing model, which is a generative model of lexicalised context-free grammar. We then extend the model to include a probabilistic treatment of both subcategorisation and wh-movement. Results on Wall Street Journal text show that the parser performs at 88.1/87.5% constituent precision/recall, an average improvement of 2.3% over (Collins 96).<|reference_end|> | arxiv | @article{collins1997three,
title={Three Generative, Lexicalised Models for Statistical Parsing},
author={Michael Collins (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9706022},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706022},
primaryClass={cmp-lg cs.CL}
} | collins1997three |
arxiv-669013 | cmp-lg/9706023 | An Information Extraction Core System for Real World German Text Processing | <|reference_start|>An Information Extraction Core System for Real World German Text Processing: This paper describes SMES, an information extraction core system for real world German text processing. The basic design criterion of the system is of providing a set of basic powerful, robust, and efficient natural language components and generic linguistic knowledge sources which can easily be customized for processing different tasks in a flexible manner.<|reference_end|> | arxiv | @article{neumann1997an,
title={An Information Extraction Core System for Real World German Text
Processing},
author={G. Neumann, R. Backofen, J. Baur, M. Becker, C. Braun (DFKI GmbH)},
journal={arXiv preprint arXiv:cmp-lg/9706023},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706023},
primaryClass={cmp-lg cs.CL}
} | neumann1997an |
arxiv-669014 | cmp-lg/9706024 | A Lexicalist Approach to the Translation of Colloquial Text | <|reference_start|>A Lexicalist Approach to the Translation of Colloquial Text: Colloquial English (CE) as found in television programs or typical conversations is different than text found in technical manuals, newspapers and books. Phrases tend to be shorter and less sophisticated. In this paper, we look at some of the theoretical and implementational issues involved in translating CE. We present a fully automatic large-scale multilingual natural language processing system for translation of CE input text, as found in the commercially transmitted closed-caption television signal, into simple target sentences. Our approach is based on the Whitelock's Shake and Bake machine translation paradigm, which relies heavily on lexical resources. The system currently translates from English to Spanish with the translation modules for Brazilian Portuguese under development.<|reference_end|> | arxiv | @article{popowich1997a,
title={A Lexicalist Approach to the Translation of Colloquial Text},
author={Fred Popowich, Davide Turcato, Olivier Laurens, Paul McFetridge, J.
Devlan Nicholson, Patrick McGivern, Maricela Corzo Pena, Lisa Pidruchney, and
Scott MacDonald (Simon Fraser University, Burnaby, Canada; TCC
Communications, Victoria, Canada)},
journal={Proceedings of the 7th International Conference on Theoretical
Issues in Machine Translation (TMI '97), Santa Fe, NM, 23-25 July 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706024},
primaryClass={cmp-lg cs.CL}
} | popowich1997a |
arxiv-669015 | cmp-lg/9706025 | A Portable Algorithm for Mapping Bitext Correspondence | <|reference_start|>A Portable Algorithm for Mapping Bitext Correspondence: The first step in most empirical work in multilingual NLP is to construct maps of the correspondence between texts and their translations ({\bf bitext maps}). The Smooth Injective Map Recognizer (SIMR) algorithm presented here is a generic pattern recognition algorithm that is particularly well-suited to mapping bitext correspondence. SIMR is faster and significantly more accurate than other algorithms in the literature. The algorithm is robust enough to use on noisy texts, such as those resulting from OCR input, and on translations that are not very literal. SIMR encapsulates its language-specific heuristics, so that it can be ported to any language pair with a minimal effort.<|reference_end|> | arxiv | @article{melamed1997a,
title={A Portable Algorithm for Mapping Bitext Correspondence},
author={I. Dan Melamed (University of Pennsylvania)},
journal={Proceedings of the 35th Conference of the Association for
Computational Linguistics (ACL'97), Madrid, Spain, 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706025},
primaryClass={cmp-lg cs.CL}
} | melamed1997a |
arxiv-669016 | cmp-lg/9706026 | A Word-to-Word Model of Translational Equivalence | <|reference_start|>A Word-to-Word Model of Translational Equivalence: Many multilingual NLP applications need to translate words between different languages, but cannot afford the computational expense of inducing or applying a full translation model. For these applications, we have designed a fast algorithm for estimating a partial translation model, which accounts for translational equivalence only at the word level. The model's precision/recall trade-off can be directly controlled via one threshold parameter. This feature makes the model more suitable for applications that are not fully statistical. The model's hidden parameters can be easily conditioned on information extrinsic to the model, providing an easy way to integrate pre-existing knowledge such as part-of-speech, dictionaries, word order, etc.. Our model can link word tokens in parallel texts as well as other translation models in the literature. Unlike other translation models, it can automatically produce dictionary-sized translation lexicons, and it can do so with over 99% accuracy.<|reference_end|> | arxiv | @article{melamed1997a,
title={A Word-to-Word Model of Translational Equivalence},
author={I. Dan Melamed (University of Pennsylvania)},
journal={Proceedings of the 35th Conference of the Association for
Computational Linguistics (ACL'97), Madrid, Spain, 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706026},
primaryClass={cmp-lg cs.CL}
} | melamed1997a |
arxiv-669017 | cmp-lg/9706027 | Automatic Discovery of Non-Compositional Compounds in Parallel Data | <|reference_start|>Automatic Discovery of Non-Compositional Compounds in Parallel Data: Automatic segmentation of text into minimal content-bearing units is an unsolved problem even for languages like English. Spaces between words offer an easy first approximation, but this approximation is not good enough for machine translation (MT), where many word sequences are not translated word-for-word. This paper presents an efficient automatic method for discovering sequences of words that are translated as a unit. The method proceeds by comparing pairs of statistical translation models induced from parallel texts in two languages. It can discover hundreds of non-compositional compounds on each iteration, and constructs longer compounds out of shorter ones. Objective evaluation on a simple machine translation task has shown the method's potential to improve the quality of MT output. The method makes few assumptions about the data, so it can be applied to parallel data other than parallel texts, such as word spellings and pronunciations.<|reference_end|> | arxiv | @article{melamed1997automatic,
title={Automatic Discovery of Non-Compositional Compounds in Parallel Data},
author={I. Dan Melamed (University of Pennsylvania)},
journal={Proceedings of the 2nd Conference on Empirical Methods in Natural
Language Processing (EMNLP'97), Providence, RI, 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706027},
primaryClass={cmp-lg cs.CL}
} | melamed1997automatic |
arxiv-669018 | cmp-lg/9706028 | Efficient Construction of Underspecified Semantics under Massive Ambiguity | <|reference_start|>Efficient Construction of Underspecified Semantics under Massive Ambiguity: We investigate the problem of determining a compact underspecified semantical representation for sentences that may be highly ambiguous. Due to combinatorial explosion, the naive method of building semantics for the different syntactic readings independently is prohibitive. We present a method that takes as input a syntactic parse forest with associated constraint-based semantic construction rules and directly builds a packed semantic structure. The algorithm is fully implemented and runs in $O(n^4 log(n))$ in sentence length, if the grammar meets some reasonable `normality' restrictions.<|reference_end|> | arxiv | @article{doerre1997efficient,
title={Efficient Construction of Underspecified Semantics under Massive
Ambiguity},
author={Jochen Doerre (Univ. Stuttgart)},
journal={Proceedings of ACL/EACL'97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9706028},
primaryClass={cmp-lg cs.CL}
} | doerre1997efficient |
arxiv-669019 | cmp-lg/9706029 | Learning Parse and Translation Decisions From Examples With Rich Context | <|reference_start|>Learning Parse and Translation Decisions From Examples With Rich Context: We propose a system for parsing and translating natural language that learns from examples and uses some background knowledge. As our parsing model we choose a deterministic shift-reduce type parser that integrates part-of-speech tagging and syntactic and semantic processing. Applying machine learning techniques, the system uses parse action examples acquired under supervision to generate a parser in the form of a decision structure, a generalization of decision trees. To learn good parsing and translation decisions, our system relies heavily on context, as encoded in currently 205 features describing the morphological, syntactical and semantical aspects of a given parse state. Compared with recent probabilistic systems that were trained on 40,000 sentences, our system relies on more background knowledge and a deeper analysis, but radically fewer examples, currently 256 sentences. We test our parser on lexically limited sentences from the Wall Street Journal and achieve accuracy rates of 89.8% for labeled precision, 98.4% for part of speech tagging and 56.3% of test sentences without any crossing brackets. Machine translations of 32 Wall Street Journal sentences to German have been evaluated by 10 bilingual volunteers and been graded as 2.4 on a 1.0 (best) to 6.0 (worst) scale for both grammatical correctness and meaning preservation.<|reference_end|> | arxiv | @article{hermjakob1997learning,
title={Learning Parse and Translation Decisions From Examples With Rich Context},
author={Ulf Hermjakob (Dept. of Computer Sciences, University of Texas at
Austin)},
journal={arXiv preprint arXiv:cmp-lg/9706029},
year={1997},
number={TR 97-12},
archivePrefix={arXiv},
eprint={cmp-lg/9706029},
primaryClass={cmp-lg cs.CL}
} | hermjakob1997learning |
arxiv-669020 | cmp-lg/9707001 | Reluctant Paraphrase: Textual Restructuring under an Optimisation Model | <|reference_start|>Reluctant Paraphrase: Textual Restructuring under an Optimisation Model: This paper develops a computational model of paraphrase under which text modification is carried out reluctantly; that is, there are external constraints, such as length or readability, on an otherwise ideal text, and modifications to the text are necessary to ensure conformance to these constraints. This problem is analogous to a mathematical optimisation problem: the textual constraints can be described as a set of constraint equations, and the requirement for minimal change to the text can be expressed as a function to be minimised; so techniques from this domain can be used to solve the problem. The work is done as part of a computational paraphrase system using the XTAG system as a base. The paper will present a theoretical computational framework for working within the Reluctant Paraphrase paradigm: three types of textual constraints are specified, effects of paraphrase on text are described, and a model incorporating mathematical optimisation techniques is outlined.<|reference_end|> | arxiv | @article{dras1997reluctant,
title={Reluctant Paraphrase: Textual Restructuring under an Optimisation Model},
author={Mark Dras (Microsoft Research Institute, Macquarie University)},
journal={arXiv preprint arXiv:cmp-lg/9707001},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707001},
primaryClass={cmp-lg cs.CL}
} | dras1997reluctant |
arxiv-669021 | cmp-lg/9707002 | Automatic Detection of Text Genre | <|reference_start|>Automatic Detection of Text Genre: As the text databases available to users become larger and more heterogeneous, genre becomes increasingly important for computational linguistics as a complement to topical and structural principles of classification. We propose a theory of genres as bundles of facets, which correlate with various surface cues, and argue that genre detection based on surface cues is as successful as detection based on deeper structural properties.<|reference_end|> | arxiv | @article{kessler1997automatic,
title={Automatic Detection of Text Genre},
author={Brett Kessler, Geoffrey Nunberg, Hinrich Schuetze (Xerox PARC and
Stanford University)},
journal={Proceedings ACL/EACL 1997, Madrid, p. 32-38},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707002},
primaryClass={cmp-lg cs.CL}
} | kessler1997automatic |
arxiv-669022 | cmp-lg/9707003 | A Flexible POS tagger Using an Automatically Acquired Language Model | <|reference_start|>A Flexible POS tagger Using an Automatically Acquired Language Model: We present an algorithm that automatically learns context constraints using statistical decision trees. We then use the acquired constraints in a flexible POS tagger. The tagger is able to use information of any degree: n-grams, automatically learned context constraints, linguistically motivated manually written constraints, etc. The sources and kinds of constraints are unrestricted, and the language model can be easily extended, improving the results. The tagger has been tested and evaluated on the WSJ corpus.<|reference_end|> | arxiv | @article{marquez1997a,
title={A Flexible POS tagger Using an Automatically Acquired Language Model},
author={Lluis Marquez and Lluis Padro},
journal={Proceedings of EACL/ACL 1997, Madrid, Spain},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707003},
primaryClass={cmp-lg cs.CL}
} | marquez1997a |
arxiv-669023 | cmp-lg/9707004 | Discourse Preferences in Dynamic Logic | <|reference_start|>Discourse Preferences in Dynamic Logic: In order to enrich dynamic semantic theories with a `pragmatic' capacity, we combine dynamic and nonmonotonic (preferential) logics in a modal logic setting. We extend a fragment of Van Benthem and De Rijke's dynamic modal logic with additional preferential operators in the underlying static logic, which enables us to define defeasible (pragmatic) entailments over a given piece of discourse. We will show how this setting can be used for a dynamic logical analysis of preferential resolutions of ambiguous pronouns in discourse.<|reference_end|> | arxiv | @article{jaspars1997discourse,
title={Discourse Preferences in Dynamic Logic},
author={Jan Jaspars (CWI) and Megumi Kameyama (SRI International)},
journal={arXiv preprint arXiv:cmp-lg/9707004},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707004},
primaryClass={cmp-lg cs.CL}
} | jaspars1997discourse |
arxiv-669024 | cmp-lg/9707005 | Intrasentential Centering: A Case Study | <|reference_start|>Intrasentential Centering: A Case Study: One of the necessary extensions to the centering model is a mechanism to handle pronouns with intrasentential antecedents. Existing centering models deal only with discourses consisting of simple sentences. It leaves unclear how to delimit center-updating utterance units and how to process complex utterances consisting of multiple clauses. In this paper, I will explore the extent to which a straightforward extension of an existing intersentential centering model contributes to this effect. I will motivate an approach that breaks a complex sentence into a hierarchy of center-updating units and proposes the preferred interpretation of a pronoun in its local context arbitrarily deep in the given sentence structure. This approach will be substantiated with examples from naturally occurring written discourses.<|reference_end|> | arxiv | @article{kameyama1997intrasentential,
title={Intrasentential Centering: A Case Study},
author={Megumi Kameyama (SRI International)},
journal={arXiv preprint arXiv:cmp-lg/9707005},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707005},
primaryClass={cmp-lg cs.CL}
} | kameyama1997intrasentential |
arxiv-669025 | cmp-lg/9707006 | Finite State Transducers Approximating Hidden Markov Models | <|reference_start|>Finite State Transducers Approximating Hidden Markov Models: This paper describes the conversion of a Hidden Markov Model into a sequential transducer that closely approximates the behavior of the stochastic model. This transformation is especially advantageous for part-of-speech tagging because the resulting transducer can be composed with other transducers that encode correction rules for the most frequent tagging errors. The speed of tagging is also improved. The described methods have been implemented and successfully tested on six languages.<|reference_end|> | arxiv | @article{kempe1997finite,
title={Finite State Transducers Approximating Hidden Markov Models},
author={Andre Kempe (Rank Xerox Research Centre, Grenoble Laboratory, France)},
journal={ACL'97, pp.460-467, Madrid, Spain. July 10, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707006},
primaryClass={cmp-lg cs.CL}
} | kempe1997finite |
arxiv-669026 | cmp-lg/9707007 | Tailored Patient Information: Some Issues and Questions | <|reference_start|>Tailored Patient Information: Some Issues and Questions: Tailored patient information (TPI) systems are computer programs which produce personalised heath-information material for patients. TPI systems are of growing interest to the natural-language generation (NLG) community; many TPI systems have also been developed in the medical community, usually with mail-merge technology. No matter what technology is used, experience shows that it is not easy to field a TPI system, even if it is shown to be effective in clinical trials. In this paper we discuss some of the difficulties in fielding TPI systems. This is based on our experiences with 2 TPI systems, one for generating asthma-information booklets and one for generating smoking-cessation letters.<|reference_end|> | arxiv | @article{reiter1997tailored,
title={Tailored Patient Information: Some Issues and Questions},
author={Ehud Reiter (Univ of Aberdeen, CS) and Liesl Osman (Univ of Aberdeen,
Medicine)},
journal={Proceedings of the 1997 ACL Workshop on From Research to
Commercial Applications: Making NLP Work in Practice},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707007},
primaryClass={cmp-lg cs.CL}
} | reiter1997tailored |
arxiv-669027 | cmp-lg/9707008 | Stressed and Unstressed Pronouns: Complementary Preferences | <|reference_start|>Stressed and Unstressed Pronouns: Complementary Preferences: I present a unified account of interpretation preferences of stressed and unstressed pronouns in discourse. The central intuition is the Complementary Preference Hypothesis that predicts the interpretation preference of a stressed pronoun from that of an unstressed pronoun in the same discourse position. The base preference must be computed in a total pragmatics module including commonsense preferences. The focus constraint in Rooth's theory of semantic focus is interpreted to be the salient subset of the domain in the local attentional state in the discourse context independently motivated for other purposes in Centering Theory.<|reference_end|> | arxiv | @article{kameyama1997stressed,
title={Stressed and Unstressed Pronouns: Complementary Preferences},
author={Megumi Kameyama},
journal={arXiv preprint arXiv:cmp-lg/9707008},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707008},
primaryClass={cmp-lg cs.CL}
} | kameyama1997stressed |
arxiv-669028 | cmp-lg/9707009 | Recognizing Referential Links: An Information Extraction Perspective | <|reference_start|>Recognizing Referential Links: An Information Extraction Perspective: We present an efficient and robust reference resolution algorithm in an end-to-end state-of-the-art information extraction system, which must work with a considerably impoverished syntactic analysis of the input sentences. Considering this disadvantage, the basic setup to collect, filter, then order by salience does remarkably well with third-person pronouns, but needs more semantic and discourse information to improve the treatments of other expression types.<|reference_end|> | arxiv | @article{kameyama1997recognizing,
title={Recognizing Referential Links: An Information Extraction Perspective},
author={Megumi Kameyama (SRI International)},
journal={In Mitkov, R. and B. Boguraev, eds., Proceedings of ACL/EACL
Workshop on Operational Factors in Practical, Robust Anaphora Resolution for
Unrestricted Texts, Madrid, July 1997, pages 46-53.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707009},
primaryClass={cmp-lg cs.CL}
} | kameyama1997recognizing |
arxiv-669029 | cmp-lg/9707010 | Experiences with the GTU grammar development environment | <|reference_start|>Experiences with the GTU grammar development environment: In this paper we describe our experiences with a tool for the development and testing of natural language grammars called GTU (German: Grammatik-Testumgebung; grammar test environment). GTU supports four grammar formalisms under a window-oriented user interface. Additionally, it contains a set of German test sentences covering various syntactic phenomena as well as three types of German lexicons that can be attached to a grammar via an integrated lexicon interface. What follows is a description of the experiences we gained when we used GTU as a tutoring tool for students and as an experimental tool for CL researchers. From these we will derive the features necessary for a future grammar workbench.<|reference_end|> | arxiv | @article{volk1997experiences,
title={Experiences with the GTU grammar development environment},
author={Martin Volk (University of Zurich, Switzerland) and Dirk Richarz
(University of Koblenz-Landau, Germany)},
journal={Proceedings of Workshop on Computational Environments For Grammar
Development And Linguistic Engineering at the ACL/EACL Joint Conference 1997,
107-113},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707010},
primaryClass={cmp-lg cs.CL}
} | volk1997experiences |
arxiv-669030 | cmp-lg/9707011 | A lexical database tool for quantitative phonological research | <|reference_start|>A lexical database tool for quantitative phonological research: A lexical database tool tailored for phonological research is described. Database fields include transcriptions, glosses and hyperlinks to speech files. Database queries are expressed using HTML forms, and these permit regular expression search on any combination of fields. Regular expressions are passed directly to a Perl CGI program, enabling the full flexibility of Perl extended regular expressions. The regular expression notation is extended to better support phonological searches, such as search for minimal pairs. Search results are presented in the form of HTML or LaTeX tables, where each cell is either a number (representing frequency) or a designated subset of the fields. Tables have up to four dimensions, with an elegant system for specifying which fragments of which fields should be used for the row/column labels. The tool offers several advantages over traditional methods of analysis: (i) it supports a quantitative method of doing phonological research; (ii) it gives universal access to the same set of informants; (iii) it enables other researchers to hear the original speech data without having to rely on published transcriptions; (iv) it makes the full power of regular expression search available, and search results are full multimedia documents; and (v) it enables the early refutation of false hypotheses, shortening the analysis-hypothesis-test loop. A life-size application to an African tone language (Dschang) is used for exemplification throughout the paper. The database contains 2200 records, each with approximately 15 fields. Running on a PC laptop with a stand-alone web server, the `Dschang HyperLexicon' has already been used extensively in phonological fieldwork and analysis in Cameroon.<|reference_end|> | arxiv | @article{bird1997a,
title={A lexical database tool for quantitative phonological research},
author={Steven Bird (University of Edinburgh)},
journal={Proceedings of the Third Meeting of the ACL Special Interest Group
in Computational Phonology, pp. 33-39, Madrid, July 1997. ACL},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707011},
primaryClass={cmp-lg cs.CL}
} | bird1997a |
arxiv-669031 | cmp-lg/9707012 | Adjunction As Substitution: An Algebraic Formulation of Regular, Context-Free and Tree Adjoining Languages | <|reference_start|>Adjunction As Substitution: An Algebraic Formulation of Regular, Context-Free and Tree Adjoining Languages: This note presents a method of interpreting the tree adjoining languages as the natural third step in a hierarchy that starts with the regular and the context-free languages. The central notion in this account is that of a higher-order substitution. Whereas in traditional presentations of rule systems for abstract language families the emphasis has been on a first-order substitution process in which auxiliary variables are replaced by elements of the carrier of the proper algebra - concatenations of terminal and auxiliary category symbols in the string case - we lift this process to the level of operations defined on the elements of the carrier of the algebra. Our own view is that this change of emphasis provides the adequate platform for a better understanding of the operation of adjunction. To put it in a nutshell: Adjoining is not a first-order, but a second-order substitution operation.<|reference_end|> | arxiv | @article{moennich1997adjunction,
title={Adjunction As Substitution: An Algebraic Formulation of Regular,
Context-Free and Tree Adjoining Languages},
author={Uwe Moennich (Tuebingen University/SfS)},
journal={arXiv preprint arXiv:cmp-lg/9707012},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707012},
primaryClass={cmp-lg cs.CL}
} | moennich1997adjunction |
arxiv-669032 | cmp-lg/9707013 | On Cloning Context-Freeness | <|reference_start|>On Cloning Context-Freeness: To Rogers (1994) we owe the insight that monadic second order predicate logic with multiple successors (MSO) is well suited in many respects as a realistic formal base for syntactic theorizing. However, the agreeable formal properties of this logic come at a cost: MSO is equivalent with the class of regular tree automata/grammars, and, thereby, with the class of context-free languages. This paper outlines one approach towards a solution of MSO's expressivity problem. On the background of an algebraically refined Chomsky hierarchy, which allows the definition of several classes of languages--in particular, a whole hierarchy between CF and CS--via regular tree grammars over unambiguously derivable alphabets of varying complexity plus their respective yield-functions, it shows that not only some non-context-free string languages can be captured by context-free means in this way, but that this approach can be generalized to the corresponding structures. I.e., non-recognizable sets of structures can--up to homomorphism--be coded context-freely. Since the class of languages covered--Fischer's (1968) OI family of indexed languages--includes all attested instances of non-context-freeness in natural language, there exists an indirect, to be sure, but completely general way to formally describe the natural languages using a weak framework like MSO.<|reference_end|> | arxiv | @article{moennich1997on,
title={On Cloning Context-Freeness},
author={Uwe Moennich (Tuebingen University/SfS)},
journal={arXiv preprint arXiv:cmp-lg/9707013},
year={1997},
number={Arbeitspapiere des SFB 340, Bericht Nr. 114},
archivePrefix={arXiv},
eprint={cmp-lg/9707013},
primaryClass={cmp-lg cs.CL}
} | moennich1997on |
arxiv-669033 | cmp-lg/9707014 | Towards a PURE Spoken Dialogue System for Information Access | <|reference_start|>Towards a PURE Spoken Dialogue System for Information Access: With the rapid explosion of the World Wide Web, it is becoming increasingly possible to easily acquire a wide variety of information such as flight schedules, yellow pages, used car prices, current stock prices, entertainment event schedules, account balances, etc. It would be very useful to have spoken dialogue interfaces for such information access tasks. We identify portability, usability, robustness, and extensibility as the four primary design objectives for such systems. In other words, the objective is to develop a PURE (Portable, Usable, Robust, Extensible) system. A two-layered dialogue architecture for spoken dialogue systems is presented where the upper layer is domain-independent and the lower layer is domain-specific. We are implementing this architecture in a mixed-initiative system that accesses flight arrival/departure information from the World Wide Web.<|reference_end|> | arxiv | @article{agarwal1997towards,
title={Towards a PURE Spoken Dialogue System for Information Access},
author={Rajeev Agarwal (Texas Instruments, Inc.)},
journal={Proceedings of the ACL/EACL Workshop on "Interactive Spoken Dialog
Systems: Bringing Speech and NLP Together in Real Applications," Madrid,
Spain, pp. 90-97, 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707014},
primaryClass={cmp-lg cs.CL}
} | agarwal1997towards |
arxiv-669034 | cmp-lg/9707015 | Tagging Grammatical Functions | <|reference_start|>Tagging Grammatical Functions: This paper addresses issues in automated treebank construction. We show how standard part-of-speech tagging techniques extend to the more general problem of structural annotation, especially for determining grammatical functions and syntactic categories. Annotation is viewed as an interactive process where manual and automatic processing alternate. Efficiency and accuracy results are presented. We also discuss further automation steps.<|reference_end|> | arxiv | @article{brants1997tagging,
title={Tagging Grammatical Functions},
author={Thorsten Brants, Wojciech Skut, and Brigitte Krenn (Computational
Linguistics, University of the Saarland, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9707015},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707015},
primaryClass={cmp-lg cs.CL}
} | brants1997tagging |
arxiv-669035 | cmp-lg/9707016 | On aligning trees | <|reference_start|>On aligning trees: The increasing availability of corpora annotated for linguistic structure prompts the question: if we have the same texts, annotated for phrase structure under two different schemes, to what extent do the annotations agree on structuring within the text? We suggest the term tree alignment to indicate the situation where two markup schemes choose to bracket off the same text elements. We propose a general method for determining agreement between two analyses. We then describe an efficient implementation, which is also modular in that the core of the implementation can be reused regardless of the format of markup used in the corpora. The output of the implementation on the Susanne and Penn treebank corpora is discussed.<|reference_end|> | arxiv | @article{calder1997on,
title={On aligning trees},
author={Jo Calder (University of Edinburgh Language Technology Group, Human
Communication Research Centre and Centre for Cognitive Science)},
journal={arXiv preprint arXiv:cmp-lg/9707016},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707016},
primaryClass={cmp-lg cs.CL}
} | calder1997on |
arxiv-669036 | cmp-lg/9707017 | Stochastic phonological grammars and acceptability | <|reference_start|>Stochastic phonological grammars and acceptability: In foundational works of generative phonology it is claimed that subjects can reliably discriminate between possible but non-occurring words and words that could not be English. In this paper we examine the use of a probabilistic phonological parser for words to model experimentally-obtained judgements of the acceptability of a set of nonsense words. We compared various methods of scoring the goodness of the parse as a predictor of acceptability. We found that the probability of the worst part is not the best score of acceptability, indicating that classical generative phonology and Optimality Theory miss an important fact, as these approaches do not recognise a mechanism by which the frequency of well-formed parts may ameliorate the unacceptability of low-frequency parts. We argue that probabilistic generative grammars are demonstrably a more psychologically realistic model of phonological competence than standard generative phonology or Optimality Theory.<|reference_end|> | arxiv | @article{coleman1997stochastic,
title={Stochastic phonological grammars and acceptability},
author={John Coleman and Janet Pierrehumbert},
journal={arXiv preprint arXiv:cmp-lg/9707017},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707017},
primaryClass={cmp-lg cs.CL}
} | coleman1997stochastic |
arxiv-669037 | cmp-lg/9707018 | Multilingual phonological analysis and speech synthesis | <|reference_start|>Multilingual phonological analysis and speech synthesis: We give an overview of multilingual speech synthesis using the IPOX system. The first part discusses work in progress for various languages: Tashlhit Berber, Urdu and Dutch. The second part discusses a multilingual phonological grammar, which can be adapted to a particular language by setting parameters and adding language-specific details.<|reference_end|> | arxiv | @article{coleman1997multilingual,
title={Multilingual phonological analysis and speech synthesis},
author={John Coleman, Arthur Dirksen, Sarmad Hussain and Juliette Waals},
journal={arXiv preprint arXiv:cmp-lg/9707018},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707018},
primaryClass={cmp-lg cs.CL}
} | coleman1997multilingual |
arxiv-669038 | cmp-lg/9707019 | Generating Coherent Messages in Real-time Decision Support: Exploiting Discourse Theory for Discourse Practice | <|reference_start|>Generating Coherent Messages in Real-time Decision Support: Exploiting Discourse Theory for Discourse Practice: This paper presents a message planner, TraumaGEN, that draws on rhetorical structure and discourse theory to address the problem of producing integrated messages from individual critiques, each of which is designed to achieve its own communicative goal. TraumaGEN takes into account the purpose of the messages, the situation in which the messages will be received, and the social role of the system.<|reference_end|> | arxiv | @article{carberry1997generating,
title={Generating Coherent Messages in Real-time Decision Support: Exploiting
Discourse Theory for Discourse Practice},
author={Sandra Carberry, Terrence Harvey (University of Delaware)},
journal={Proceedings of the 19th Annual Conference of the Cognitive Science
Society (1997)},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707019},
primaryClass={cmp-lg cs.CL}
} | carberry1997generating |
arxiv-669039 | cmp-lg/9707020 | A Czech Morphological Lexicon | <|reference_start|>A Czech Morphological Lexicon: In this paper, a treatment of Czech phonological rules in two-level morphology approach is described. First the possible phonological alternations in Czech are listed and then their treatment in a practical application of a Czech morphological lexicon.<|reference_end|> | arxiv | @article{skoumalova1997a,
title={A Czech Morphological Lexicon},
author={Hana Skoumalova (Charles University)},
journal={Proceedings of the Third Meeting of the ACL Special Interest Group
in Computational Phonology, pp. 41-47, Madrid, July 1997. ACL},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9707020},
primaryClass={cmp-lg cs.CL}
} | skoumalova1997a |
arxiv-669040 | cmp-lg/9708001 | Expectations in Incremental Discourse Processing | <|reference_start|>Expectations in Incremental Discourse Processing: The way in which discourse features express connections back to the previous discourse has been described in the literature in terms of adjoining at the right frontier of discourse structure. But this does not allow for discourse features that express expectations about what is to come in the subsequent discourse. After characterizing these expectations and their distribution in text, we show how an approach that makes use of substitution as well as adjoining on a suitably defined right frontier, can be used to both process expectations and constrain discourse processing in general.<|reference_end|> | arxiv | @article{cristea1997expectations,
title={Expectations in Incremental Discourse Processing},
author={Dan Cristea (University "A.I. Cuza", Iasi, Romania), Bonnie Lynn
Webber (University of Pennsylvania)},
journal={Proceedings 35th Annual ACL, Madrid - June 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708001},
primaryClass={cmp-lg cs.CL}
} | cristea1997expectations |
arxiv-669041 | cmp-lg/9708002 | Natural Language Generation in Healthcare: Brief Review | <|reference_start|>Natural Language Generation in Healthcare: Brief Review: Good communication is vital in healthcare, both among healthcare professionals, and between healthcare professionals and their patients. And well-written documents, describing and/or explaining the information in structured databases may be easier to comprehend, more edifying and even more convincing, than the structured data, even when presented in tabular or graphic form. Documents may be automatically generated from structured data, using techniques from the field of natural language generation. These techniques are concerned with how the content, organisation and language used in a document can be dynamically selected, depending on the audience and context. They have been used to generate health education materials, explanations and critiques in decision support systems, and medical reports and progress notes.<|reference_end|> | arxiv | @article{cawsey1997natural,
title={Natural Language Generation in Healthcare: Brief Review},
author={Alison J. Cawsey (Heriot-Watt University), Bonnie L. Webber
(University of Pennsylvania), and Ray B. Jones (University of Glasgow)},
journal={arXiv preprint arXiv:cmp-lg/9708002},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708002},
primaryClass={cmp-lg cs.CL}
} | cawsey1997natural |
arxiv-669042 | cmp-lg/9708003 | Structure and Ostension in the Interpretation of Discourse Deixis | <|reference_start|>Structure and Ostension in the Interpretation of Discourse Deixis: This paper examines demonstrative pronouns used as deictics to refer to the interpretation of one or more clauses. Although this usage is frowned upon in style manuals (for example Strunk and White (1959) state that ``This. The pronoun 'this', referring to the complete sense of a preceding sentence or clause, cannot always carry the load and so may produce an imprecise statement.''), it is nevertheless very common in written text. Handling this usage poses a problem for Natural Language Understanding systems. The solution I propose is based on distinguishing between what can be pointed to and what can be referred to by virtue of pointing. I argue that a restricted set of discourse segments yield what such demonstrative pronouns can point to and a restricted set of what Nunberg (1979) has called referring functions yield what they can refer to by virtue of that pointing.<|reference_end|> | arxiv | @article{webber1997structure,
title={Structure and Ostension in the Interpretation of Discourse Deixis},
author={Bonnie L. Webber (University of Pennsylvania)},
journal={Language and Cognitive Processes 6(2), May 1991, pp. 107-135},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708003},
primaryClass={cmp-lg cs.CL}
} | webber1997structure |
arxiv-669043 | cmp-lg/9708004 | Epistemic NP Modifiers | <|reference_start|>Epistemic NP Modifiers: The paper considers participles such as "unknown", "identified" and "unspecified", which in sentences such as "Solange is staying in an unknown hotel" have readings equivalent to an indirect question "Solange is staying in a hotel, and it is not known which hotel it is." We discuss phenomena including disambiguation of quantifier scope and a restriction on the set of determiners which allow the reading in question. Epistemic modifiers are analyzed in a DRT framework with file (information state) discourse referents. The proposed semantics uses a predication on files and discourse referents which is related to recent developments in dynamic modal predicate calculus. It is argued that a compositional DRT semantics must employ a semantic type of discourse referents, as opposed to just a type of individuals. A connection is developed between the scope effects of epistemic modifiers and the scope-disambiguating effect of "a certain".<|reference_end|> | arxiv | @article{abusch1997epistemic,
title={Epistemic NP Modifiers},
author={Dorit Abusch and Mats Rooth (IMS, University of Stuttgart)},
journal={arXiv preprint arXiv:cmp-lg/9708004},
year={1997},
number={Arbeitspapiere des SFB 340, Bericht Nr. 108, Juni 1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708004},
primaryClass={cmp-lg cs.CL}
} | abusch1997epistemic |
arxiv-669044 | cmp-lg/9708005 | Centering, Anaphora Resolution, and Discourse Structure | <|reference_start|>Centering, Anaphora Resolution, and Discourse Structure: Centering was formulated as a model of the relationship between attentional state, the form of referring expressions, and the coherence of an utterance within a discourse segment (Grosz, Joshi and Weinstein, 1986; Grosz, Joshi and Weinstein, 1995). In this chapter, I argue that the restriction of centering to operating within a discourse segment should be abandoned in order to integrate centering with a model of global discourse structure. The within-segment restriction causes three problems. The first problem is that centers are often continued over discourse segment boundaries with pronominal referring expressions whose form is identical to those that occur within a discourse segment. The second problem is that recent work has shown that listeners perceive segment boundaries at various levels of granularity. If centering models a universal processing phenomenon, it is implausible that each listener is using a different centering algorithm. The third issue is that even for utterances within a discourse segment, there are strong contrasts between utterances whose adjacent utterance within a segment is hierarchically recent and those whose adjacent utterance within a segment is linearly recent. This chapter argues that these problems can be eliminated by replacing Grosz and Sidner's stack model of attentional state with an alternate model, the cache model. I show how the cache model is easily integrated with the centering algorithm, and provide several types of data from naturally occurring discourses that support the proposed integrated model. Future work should provide additional support for these claims with an examination of a larger corpus of naturally occurring discourses.<|reference_end|> | arxiv | @article{walker1997centering,
title={Centering, Anaphora Resolution, and Discourse Structure},
author={Marilyn A. Walker},
journal={Centering In Discourse, eds. Marilyn A. Walker, Aravind K. Joshi
and Ellen F. Prince, Oxford University Press, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708005},
primaryClass={cmp-lg cs.CL}
} | walker1997centering
arxiv-669045 | cmp-lg/9708006 | Global Thresholding and Multiple Pass Parsing | <|reference_start|>Global Thresholding and Multiple Pass Parsing: We present a variation on classic beam thresholding techniques that is up to an order of magnitude faster than the traditional method, at the same performance level. We also present a new thresholding technique, global thresholding, which, combined with the new beam thresholding, gives an additional factor of two improvement, and a novel technique, multiple pass parsing, that can be combined with the others to yield yet another 50% improvement. We use a new search algorithm to simultaneously optimize the thresholding parameters of the various algorithms.<|reference_end|> | arxiv | @article{goodman1997global,
title={Global Thresholding and Multiple Pass Parsing},
author={Joshua Goodman (Harvard University)},
journal={Proceedings of the Second Conference on Empirical Methods in
Natural Language Processing, 11-25},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708006},
primaryClass={cmp-lg cs.CL}
} | goodman1997global |
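The classic beam-thresholding step that the Goodman entry above takes as its starting point can be illustrated with a short sketch. This is not code from the paper: the chart-cell representation, the use of inside probabilities, and the multiplicative beam factor are assumptions made for illustration, and the paper's prior-weighted beam, global thresholding, and multiple-pass variants are not reproduced here.

```python
def beam_prune(cell, beam_factor):
    """Classic per-cell beam thresholding (illustrative sketch).

    cell: dict mapping nonterminal labels to inside probabilities for one chart span.
    beam_factor: keep entries whose probability is within this factor of the best.
    """
    if not cell:
        return cell
    best = max(cell.values())
    threshold = best * beam_factor
    # Discard low-probability constituents so they cannot spawn further work.
    return {label: prob for label, prob in cell.items() if prob >= threshold}
```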
arxiv-669046 | cmp-lg/9708007 | A complexity measure for diachronic Chinese phonology | <|reference_start|>A complexity measure for diachronic Chinese phonology: This paper addresses the problem of deriving distance measures between parent and daughter languages with specific relevance to historical Chinese phonology. The diachronic relationship between the languages is modelled as a Probabilistic Finite State Automaton. The Minimum Message Length principle is then employed to find the complexity of this structure. The idea is that this measure is representative of the amount of dissimilarity between the two languages.<|reference_end|> | arxiv | @article{raman1997a,
title={A complexity measure for diachronic Chinese phonology},
author={Anand Raman (Comp Sc), John Newman (Linguistics and Second Language
Teaching) and Jon Patrick (Information Systems) (Massey University, New
Zealand)},
journal={arXiv preprint arXiv:cmp-lg/9708007},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708007},
primaryClass={cmp-lg cs.CL}
} | raman1997a |
arxiv-669047 | cmp-lg/9708008 | Fast Context-Free Parsing Requires Fast Boolean Matrix Multiplication | <|reference_start|>Fast Context-Free Parsing Requires Fast Boolean Matrix Multiplication: Valiant showed that Boolean matrix multiplication (BMM) can be used for CFG parsing. We prove a dual result: CFG parsers running in time $O(|G||w|^{3 - \epsilon})$ on a grammar $G$ and a string $w$ can be used to multiply $m \times m$ Boolean matrices in time $O(m^{3 - \epsilon/3})$. In the process we also provide a formal definition of parsing motivated by an informal notion due to Lang. Our result establishes one of the first limitations on general CFG parsing: a fast, practical CFG parser would yield a fast, practical BMM algorithm, which is not believed to exist.<|reference_end|> | arxiv | @article{lee1997fast,
title={Fast Context-Free Parsing Requires Fast Boolean Matrix Multiplication},
author={Lillian Lee (Cornell University)},
journal={Proceedings of the 35th ACL/8th EACL, pp 9-15},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708008},
primaryClass={cmp-lg cs.CL}
} | lee1997fast |
arxiv-669048 | cmp-lg/9708009 | DIA-MOLE: An Unsupervised Learning Approach to Adaptive Dialogue Models for Spoken Dialogue Systems | <|reference_start|>DIA-MOLE: An Unsupervised Learning Approach to Adaptive Dialogue Models for Spoken Dialogue Systems: The DIAlogue MOdel Learning Environment supports an engineering-oriented approach towards dialogue modelling for a spoken-language interface. A major step towards a dialogue model is to know about the basic units that are used to construct it and their possible sequences. In contrast to many other approaches, a set of dialogue acts is not predefined by any theory or manually during the engineering process, but is learned from data that are available in an avised spoken dialogue system. The architecture is outlined and the approach is applied to the domain of appointment scheduling. Even though it is based on a word correctness of about 70%, the predictability of dialogue acts in DIA-MOLE turns out to be comparable to that of human-assigned dialogue acts.<|reference_end|> | arxiv | @article{moeller1997dia-mole:,
title={DIA-MOLE: An Unsupervised Learning Approach to Adaptive Dialogue Models
for Spoken Dialogue Systems},
author={Jens-Uwe Moeller (Natural Language Systems Division, Dept. of Computer
Science, Univ. of Hamburg)},
journal={Proc. EUROSPEECH 97, Rhodes, Greece, 2271-2274},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708009},
primaryClass={cmp-lg cs.CL}
} | moeller1997dia-mole: |
arxiv-669049 | cmp-lg/9708010 | Similarity-Based Methods For Word Sense Disambiguation | <|reference_start|>Similarity-Based Methods For Word Sense Disambiguation: We compare four similarity-based estimation methods against back-off and maximum-likelihood estimation methods on a pseudo-word sense disambiguation task in which we controlled for both unigram and bigram frequency. The similarity-based methods perform up to 40% better on this particular task. We also conclude that events that occur only once in the training set have major impact on similarity-based estimates.<|reference_end|> | arxiv | @article{dagan1997similarity-based,
title={Similarity-Based Methods For Word Sense Disambiguation},
author={Ido Dagan (Bar-Ilan University), Lillian Lee (Cornell University), and
Fernando Pereira (AT&T Labs -- Research)},
journal={Proceedings of the 35th ACL/8th EACL, pp 56--63},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708010},
primaryClass={cmp-lg cs.CL}
} | dagan1997similarity-based |
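As a rough illustration of the similarity-based estimation discussed in the Dagan, Lee, and Pereira entry above, the sketch below shows one generic form: the conditional probability of an unseen word pair is taken as a similarity-weighted average over distributionally similar words. The neighbor set, the weight function, and the normalization are assumptions for illustration; the specific weighting schemes and the back-off combination evaluated in the paper are not reproduced here.

```python
def similarity_smoothed_prob(w2, w1, neighbors, cond_prob):
    """Generic similarity-weighted estimate of P(w2 | w1) (illustrative sketch).

    neighbors: dict mapping each word w1_sim judged similar to w1 to a weight.
    cond_prob: function giving an ordinary (e.g. maximum-likelihood) estimate P(w2 | w1_sim).
    """
    norm = sum(neighbors.values())
    if norm == 0.0:
        return 0.0
    return sum(weight * cond_prob(w2, w1_sim)
               for w1_sim, weight in neighbors.items()) / norm
```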
arxiv-669050 | cmp-lg/9708011 | Similarity-Based Approaches to Natural Language Processing | <|reference_start|>Similarity-Based Approaches to Natural Language Processing: This thesis presents two similarity-based approaches to sparse data problems. The first approach is to build soft, hierarchical clusters: soft, because each event belongs to each cluster with some probability; hierarchical, because cluster centroids are iteratively split to model finer distinctions. Our second approach is a nearest-neighbor approach: instead of calculating a centroid for each class, as in the hierarchical clustering approach, we in essence build a cluster around each word. We compare several such nearest-neighbor approaches on a word sense disambiguation task and find that as a whole, their performance is far superior to that of standard methods. In another set of experiments, we show that using estimation techniques based on the nearest-neighbor model enables us to achieve perplexity reductions of more than 20 percent over standard techniques in the prediction of low-frequency events, and statistically significant speech recognition error-rate reduction.<|reference_end|> | arxiv | @article{lee1997similarity-based,
title={Similarity-Based Approaches to Natural Language Processing},
author={Lillian Lee (Cornell University)},
journal={arXiv preprint arXiv:cmp-lg/9708011},
year={1997},
number={Harvard University Technical Report TR-11-97},
archivePrefix={arXiv},
eprint={cmp-lg/9708011},
primaryClass={cmp-lg cs.CL}
} | lee1997similarity-based |
arxiv-669051 | cmp-lg/9708012 | Encoding Frequency Information in Lexicalized Grammars | <|reference_start|>Encoding Frequency Information in Lexicalized Grammars: We address the issue of how to associate frequency information with lexicalized grammar formalisms, using Lexicalized Tree Adjoining Grammar as a representative framework. We consider systematically a number of alternative probabilistic frameworks, evaluating their adequacy from both a theoretical and empirical perspective using data from existing large treebanks. We also propose three orthogonal approaches for backing off probability estimates to cope with the large number of parameters involved.<|reference_end|> | arxiv | @article{carroll1997encoding,
title={Encoding Frequency Information in Lexicalized Grammars},
author={John Carroll, David Weir (University of Sussex)},
journal={5th International Workshop on Parsing Technologies (IWPT-97)},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708012},
primaryClass={cmp-lg cs.CL}
} | carroll1997encoding |
arxiv-669052 | cmp-lg/9708013 | explanation-based learning of data oriented parsing | <|reference_start|>explanation-based learning of data oriented parsing: This paper presents a new view of Explanation-Based Learning (EBL) of natural language parsing. Rather than employing EBL for specializing parsers by inferring new ones, this paper suggests employing EBL for learning how to reduce ambiguity only partially. The present method consists of an EBL algorithm for learning partial-parsers, and a parsing algorithm which combines partial-parsers with existing ``full-parsers". The learned partial-parsers, implementable as Cascades of Finite State Transducers (CFSTs), recognize and combine constituents efficiently, prohibiting spurious overgeneration. The parsing algorithm combines a learned partial-parser with a given full-parser such that the role of the full-parser is limited to combining the constituents, recognized by the partial-parser, and to recognizing unrecognized portions of the input sentence. Besides the reduction of the parse-space prior to disambiguation, the present method provides a way for refining existing disambiguation models that learn stochastic grammars from tree-banks. We exhibit encouraging empirical results using a pilot implementation: parse-space is reduced substantially with minimal loss of coverage. The speedup gain for disambiguation models is exemplified by experiments with the DOP model.<|reference_end|> | arxiv | @article{sima'an1997explanation-based,
title={explanation-based learning of data oriented parsing},
author={Khalil Sima'an (University of Utrecht)},
journal={arXiv preprint arXiv:cmp-lg/9708013},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9708013},
primaryClass={cmp-lg cs.CL}
} | sima'an1997explanation-based |
arxiv-669053 | cmp-lg/9709001 | The Complexity of Recognition of Linguistically Adequate Dependency Grammars | <|reference_start|>The Complexity of Recognition of Linguistically Adequate Dependency Grammars: Results of computational complexity exist for a wide range of phrase structure-based grammar formalisms, while there is an apparent lack of such results for dependency-based formalisms. We here adapt a result on the complexity of ID/LP-grammars to the dependency framework. Contrary to previous studies on heavily restricted dependency grammars, we prove that recognition (and thus, parsing) of linguistically adequate dependency grammars is NP-complete.<|reference_end|> | arxiv | @article{neuhaus1997the,
title={The Complexity of Recognition of Linguistically Adequate Dependency
Grammars},
author={Peter Neuhaus, Norbert Broeker (University of Freiburg)},
journal={Proc. ACL-EACL 1997, Madrid, Spain, pp.337-343},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709001},
primaryClass={cmp-lg cs.CL}
} | neuhaus1997the |
arxiv-669054 | cmp-lg/9709002 | Learning Methods for Combining Linguistic Indicators to Classify Verbs | <|reference_start|>Learning Methods for Combining Linguistic Indicators to Classify Verbs: Fourteen linguistically-motivated numerical indicators are evaluated for their ability to categorize verbs as either states or events. The values for each indicator are computed automatically across a corpus of text. To improve classification performance, machine learning techniques are employed to combine multiple indicators. Three machine learning methods are compared for this task: decision tree induction, a genetic algorithm, and log-linear regression.<|reference_end|> | arxiv | @article{siegel1997learning,
title={Learning Methods for Combining Linguistic Indicators to Classify Verbs},
author={Eric V. Siegel},
journal={arXiv preprint arXiv:cmp-lg/9709002},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709002},
primaryClass={cmp-lg cs.CL}
} | siegel1997learning |
arxiv-669055 | cmp-lg/9709003 | Combining Multiple Methods for the Automatic Construction of Multilingual WordNets | <|reference_start|>Combining Multiple Methods for the Automatic Construction of Multilingual WordNets: This paper explores the automatic construction of a multilingual Lexical Knowledge Base from preexisting lexical resources. First, a set of automatic and complementary techniques for linking Spanish words collected from monolingual and bilingual MRDs to English WordNet synsets is described. Second, we show how the resulting data provided by each method are then combined to produce a preliminary version of a Spanish WordNet with an accuracy over 85%. The application of these combinations increases the number of extracted connections by 40% without losing accuracy. Both coarse-grained (class level) and fine-grained (synset assignment level) confidence ratios are used and evaluated. Finally, the results for the whole process are presented.<|reference_end|> | arxiv | @article{atserias1997combining,
title={Combining Multiple Methods for the Automatic Construction of
Multilingual WordNets},
author={Jordi Atserias (Universitat Politecnica de Catalunya), Salvador
Climent (Universitat de Barcelona), Xavier Farreres (Universitat Politecnica
de Catalunya), German Rigau (Universitat Politecnica de Catalunya), Horacio
Rodriguez (Universitat Politecnica de Catalunya)},
journal={RANLP'97, Bulgaria},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709003},
primaryClass={cmp-lg cs.CL}
} | atserias1997combining |
arxiv-669056 | cmp-lg/9709004 | Integrating a Lexical Database and a Training Collection for Text Categorization | <|reference_start|>Integrating a Lexical Database and a Training Collection for Text Categorization: Automatic text categorization is a complex and useful task for many natural language processing applications. Recent approaches to text categorization focus more on algorithms than on resources involved in this operation. In contrast to this trend, we present an approach based on the integration of widely available resources such as lexical databases and training collections to overcome current limitations of the task. Our approach makes use of WordNet synonymy information to increase evidence for poorly trained categories. When testing a direct categorization, a WordNet based one, a training algorithm, and our integrated approach, the latter exhibits better performance than any of the others. Incidentally, the performance of the WordNet-based approach is comparable with that of the training approach.<|reference_end|> | arxiv | @article{hidalgo1997integrating,
title={Integrating a Lexical Database and a Training Collection for Text
Categorization},
author={Jose Maria Gomez Hidalgo, Manuel de Buenaga Rodriguez},
journal={ACL/EACL Workshop on Automatic Extraction and Building of Lexical
Semantic Resources for Natural Language Applications, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709004},
primaryClass={cmp-lg cs.CL}
} | hidalgo1997integrating |
arxiv-669057 | cmp-lg/9709005 | A generation algorithm for f-structure representations | <|reference_start|>A generation algorithm for f-structure representations: This paper shows that previously reported generation algorithms run into problems when dealing with f-structure representations. A generation algorithm that is suitable for this type of representations is presented: the Semantic Kernel Generation (SKG) algorithm. The SKG method has the same processing strategy as the Semantic Head Driven generation (SHDG) algorithm and relies on the assumption that it is possible to compute the Semantic Kernel (SK) and non Semantic Kernel (Non-SK) information for each input structure.<|reference_end|> | arxiv | @article{tuells1997a,
title={A generation algorithm for f-structure representations},
author={Toni Tuells (Universitat Pompeu Fabra)},
journal={In the Proceedings of RANLP'97 (pages 270-275), Tzigov Chark,
Bulgaria, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709005},
primaryClass={cmp-lg cs.CL}
} | tuells1997a |
arxiv-669058 | cmp-lg/9709006 | Semantic Processing of Out-Of-Vocabulary Words in a Spoken Dialogue System | <|reference_start|>Semantic Processing of Out-Of-Vocabulary Words in a Spoken Dialogue System: One of the most important causes of failure in spoken dialogue systems is usually neglected: the problem of words that are not covered by the system's vocabulary (out-of-vocabulary or OOV words). In this paper a methodology is described for the detection, classification and processing of OOV words in an automatic train timetable information system. The various extensions that had to be effected on the different modules of the system are reported, resulting in the design of appropriate dialogue strategies, as are encouraging evaluation results on the new versions of the word recogniser and the linguistic processor.<|reference_end|> | arxiv | @article{boros1997semantic,
title={Semantic Processing of Out-Of-Vocabulary Words in a Spoken Dialogue
System},
author={Manuela Boros (FORWISS, Erlangen), Maria Aretoulaki, Florian Gallwitz,
Elmar Noeth, Heinrich Niemann (Chair for Pattern Recognition, University of
Erlangen)},
journal={Proceedings of EUROSPEECH'97, Vol.4, pp.1887-1890, Rhodes, Greece},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709006},
primaryClass={cmp-lg cs.CL}
} | boros1997semantic |
arxiv-669059 | cmp-lg/9709007 | Using WordNet to Complement Training Information in Text Categorization | <|reference_start|>Using WordNet to Complement Training Information in Text Categorization: Automatic Text Categorization (TC) is a complex and useful task for many natural language applications, and is usually performed through the use of a set of manually classified documents, a training collection. We suggest the utilization of additional resources like lexical databases to increase the amount of information that TC systems make use of, and thus, to improve their performance. Our approach integrates WordNet information with two training approaches through the Vector Space Model. The training approaches we test are the Rocchio (relevance feedback) and the Widrow-Hoff (machine learning) algorithms. Results obtained from evaluation show that the integration of WordNet clearly outperforms training approaches, and that an integrated technique can effectively address the classification of low frequency categories.<|reference_end|> | arxiv | @article{rodriguez1997using,
title={Using WordNet to Complement Training Information in Text Categorization},
author={Manuel de Buenaga Rodriguez, Jose Maria Gomez Hidalgo, Belen Diaz
Agudo},
journal={Second International Conference on Recent Advances in Natural
Language Processing, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709007},
primaryClass={cmp-lg cs.CL}
} | rodriguez1997using |
arxiv-669060 | cmp-lg/9709008 | Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy | <|reference_start|>Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy: This paper presents a new approach for measuring semantic similarity/distance between words and concepts. It combines a lexical taxonomy structure with corpus statistical information so that the semantic distance between nodes in the semantic space constructed by the taxonomy can be better quantified with the computational evidence derived from a distributional analysis of corpus data. Specifically, the proposed measure is a combined approach that inherits the edge-based approach of the edge counting scheme, which is then enhanced by the node-based approach of the information content calculation. When tested on a common data set of word pair similarity ratings, the proposed approach outperforms other computational models. It gives the highest correlation value (r = 0.828) with a benchmark based on human similarity judgements, whereas an upper bound (r = 0.885) is observed when human subjects replicate the same task.<|reference_end|> | arxiv | @article{jiang1997semantic,
title={Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy},
author={Jay J. Jiang (University of Waterloo), David W. Conrath (McMaster
University)},
journal={In the Proceedings of ROCLING X, Taiwan, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709008},
primaryClass={cmp-lg cs.CL}
} | jiang1997semantic |
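For the Jiang and Conrath entry above, the frequently cited distance that combines corpus-derived information content with the taxonomy's lowest common superordinate can be sketched as follows. This is a simplified reading of the combined measure (the paper's full formulation includes further edge-weighting parameters), and the way concept probabilities are estimated here is an assumption.

```python
import math

def information_content(concept_count, total_count):
    # IC(c) = -log p(c), with p(c) estimated from corpus counts of the concept
    # (typically including the counts of all its subordinates in the taxonomy).
    return -math.log(concept_count / total_count)

def jc_distance(ic_c1, ic_c2, ic_lowest_common_superordinate):
    # Distance grows as the two concepts become more informative relative to
    # the most specific concept in the taxonomy that subsumes both of them.
    return ic_c1 + ic_c2 - 2.0 * ic_lowest_common_superordinate
```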
arxiv-669061 | cmp-lg/9709009 | Evaluating Parsing Schemes with Entropy Indicators | <|reference_start|>Evaluating Parsing Schemes with Entropy Indicators: This paper introduces an objective metric for evaluating a parsing scheme. It is based on Shannon's original work with letter sequences, which can be extended to part-of-speech tag sequences. It is shown that this regular language is an inadequate model for natural language, but a representation is used that models language slightly higher in the Chomsky hierarchy. We show how the entropy of parsed and unparsed sentences can be measured. If the entropy of the parsed sentence is lower, this indicates that some of the structure of the language has been captured. We apply this entropy indicator to support one particular parsing scheme that effects a top down segmentation. This approach could be used to decompose the parsing task into computationally more tractable subtasks. It also lends itself to the extraction of predicate/argument structure.<|reference_end|> | arxiv | @article{lyon1997evaluating,
title={Evaluating Parsing Schemes with Entropy Indicators},
author={Caroline Lyon (Computer Science Department) and Stephen Brown
(Mathematics Department, University of Hertfordshire, UK)},
journal={5th Meeting on Mathematics of Language, MOL5, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709009},
primaryClass={cmp-lg cs.CL}
} | lyon1997evaluating |
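One simple way to realise the kind of entropy indicator discussed in the Lyon and Brown entry above is to estimate the conditional entropy of a part-of-speech tag stream from bigram counts; a lower value for the parsed representation than for the raw sequence would suggest that some structure has been captured. The sketch below is an assumed realisation for illustration, not the paper's exact formulation.

```python
import math
from collections import Counter

def conditional_tag_entropy(tag_sequences):
    """Estimate H(t_i | t_{i-1}) in bits from tag bigram counts (illustrative sketch)."""
    bigrams, unigrams = Counter(), Counter()
    for seq in tag_sequences:
        for prev_tag, cur_tag in zip(seq, seq[1:]):
            bigrams[(prev_tag, cur_tag)] += 1
            unigrams[prev_tag] += 1
    total = sum(bigrams.values())
    entropy = 0.0
    for (prev_tag, cur_tag), count in bigrams.items():
        joint = count / total                      # estimate of p(t_{i-1}, t_i)
        conditional = count / unigrams[prev_tag]   # estimate of p(t_i | t_{i-1})
        entropy -= joint * math.log2(conditional)
    return entropy
```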
arxiv-669062 | cmp-lg/9709010 | Message-Passing Protocols for Real-World Parsing -- An Object-Oriented Model and its Preliminary Evaluation | <|reference_start|>Message-Passing Protocols for Real-World Parsing -- An Object-Oriented Model and its Preliminary Evaluation: We argue for a performance-based design of natural language grammars and their associated parsers in order to meet the constraints imposed by real-world NLP. Our approach incorporates declarative and procedural knowledge about language and language use within an object-oriented specification framework. We discuss several message-passing protocols for parsing and provide reasons for sacrificing completeness of the parse in favor of efficiency based on a preliminary empirical evaluation.<|reference_end|> | arxiv | @article{hahn1997message-passing,
title={Message-Passing Protocols for Real-World Parsing -- An Object-Oriented
Model and its Preliminary Evaluation},
author={Udo Hahn, Peter Neuhaus (Computational Linguistics Lab, Freiburg
University), Norbert Broeker (Institute for Natural Language Processing,
Stuttgart University)},
journal={Proc. Int'l Workshop on Parsing Technologies, 1997, Boston/MA:
MIT, pp 101-112},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709010},
primaryClass={cmp-lg cs.CL}
} | hahn1997message-passing |
arxiv-669063 | cmp-lg/9709011 | Off-line Parsability and the Well-foundedness of Subsumption | <|reference_start|>Off-line Parsability and the Well-foundedness of Subsumption: Typed feature structures are used extensively for the specification of linguistic information in many formalisms. The subsumption relation orders TFSs by their information content. We prove that subsumption of acyclic TFSs is well-founded, whereas in the presence of cycles general TFS subsumption is not well-founded. We show an application of this result for parsing, where the well-foundedness of subsumption is used to guarantee termination for grammars that are off-line parsable. We define a new version of off-line parsability that is less strict than the existing one; thus termination is guaranteed for parsing with a larger set of grammars.<|reference_end|> | arxiv | @article{wintner1997off-line,
title={Off-line Parsability and the Well-foundedness of Subsumption},
author={Shuly Wintner and Nissim Francez (Department of Computer Science,
Technion, Israel Institute of Technology, Haifa, Israel)},
journal={arXiv preprint arXiv:cmp-lg/9709011},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709011},
primaryClass={cmp-lg cs.CL}
} | wintner1997off-line |
arxiv-669064 | cmp-lg/9709012 | Using Single Layer Networks for Discrete, Sequential Data: An Example from Natural Language Processing | <|reference_start|>Using Single Layer Networks for Discrete, Sequential Data: An Example from Natural Language Processing: A natural language parser which has been successfully implemented is described. This is a hybrid system, in which neural networks operate within a rule based framework. It can be accessed via telnet for users to try on their own text. (For details, contact the author.) Tested on technical manuals, the parser finds the subject and head of the subject in over 90% of declarative sentences. The neural processing components belong to the class of Generalized Single Layer Networks (GSLN). In general, supervised, feed-forward networks need more than one layer to process data. However, in some cases data can be pre-processed with a non-linear transformation, and then presented in a linearly separable form for subsequent processing by a single layer net. Such networks offer advantages of functional transparency and operational speed. For our parser, the initial stage of processing maps linguistic data onto a higher order representation, which can then be analysed by a single layer network. This transformation is supported by information theoretic analysis.<|reference_end|> | arxiv | @article{lyon1997using,
title={Using Single Layer Networks for Discrete, Sequential Data: An Example
from Natural Language Processing},
author={Caroline Lyon and Ray Frank (Computer Science Department, University
of Hertfordshire, UK)},
journal={Neural Computing and Applications 5(4), 1997, 196-214},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709012},
primaryClass={cmp-lg cs.CL}
} | lyon1997using |
arxiv-669065 | cmp-lg/9709013 | An Abstract Machine for Unification Grammars | <|reference_start|>An Abstract Machine for Unification Grammars: This work describes the design and implementation of an abstract machine, Amalia, for the linguistic formalism ALE, which is based on typed feature structures. This formalism is one of the most widely accepted in computational linguistics and has been used for designing grammars in various linguistic theories, most notably HPSG. Amalia is composed of data structures and a set of instructions, augmented by a compiler from the grammatical formalism to the abstract instructions, and a (portable) interpreter of the abstract instructions. The effect of each instruction is defined using a low-level language that can be executed on ordinary hardware. The advantages of the abstract machine approach are twofold. From a theoretical point of view, the abstract machine gives a well-defined operational semantics to the grammatical formalism. This ensures that grammars specified using our system are endowed with well defined meaning. It enables, for example, to formally verify the correctness of a compiler for HPSG, given an independent definition. From a practical point of view, Amalia is the first system that employs a direct compilation scheme for unification grammars that are based on typed feature structures. The use of amalia results in a much improved performance over existing systems. In order to test the machine on a realistic application, we have developed a small-scale, HPSG-based grammar for a fragment of the Hebrew language, using Amalia as the development platform. This is the first application of HPSG to a Semitic language.<|reference_end|> | arxiv | @article{wintner1997an,
title={An Abstract Machine for Unification Grammars},
author={Shuly Wintner (Department of Computer Science, the Technion, Haifa,
Israel)},
journal={arXiv preprint arXiv:cmp-lg/9709013},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709013},
primaryClass={cmp-lg cs.CL}
} | wintner1997an |
arxiv-669066 | cmp-lg/9709014 | Amalia -- A Unified Platform for Parsing and Generation | <|reference_start|>Amalia -- A Unified Platform for Parsing and Generation: Contemporary linguistic theories (in particular, HPSG) are declarative in nature: they specify constraints on permissible structures, not how such structures are to be computed. Grammars designed under such theories are, therefore, suitable for both parsing and generation. However, practical implementations of such theories don't usually support bidirectional processing of grammars. We present a grammar development system that includes a compiler of grammars (for parsing and generation) to abstract machine instructions, and an interpreter for the abstract machine language. The generation compiler inverts input grammars (designed for parsing) to a form more suitable for generation. The compiled grammars are then executed by the interpreter using one control strategy, regardless of whether the grammar is the original or the inverted version. We thus obtain a unified, efficient platform for developing reversible grammars.<|reference_end|> | arxiv | @article{wintner1997amalia,
title={Amalia -- A Unified Platform for Parsing and Generation},
author={Shuly Wintner (Seminar fuer Sprachwissenschaft, Tuebingen), Evgeniy
Gabrilovich and Nissim Francez (Laboratory for Computational Linguistics,
Technion, Israel)},
journal={Proceedings of Recent Advances in Natural Language Processing
(RANLP97), Tzigov Chark, Bulgaria, 11-13 September 1997, pp. 135-142},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709014},
primaryClass={cmp-lg cs.CL}
} | wintner1997amalia |
arxiv-669067 | cmp-lg/9709015 | Segmentation of Expository Texts by Hierarchical Agglomerative Clustering | <|reference_start|>Segmentation of Expository Texts by Hierarchical Agglomerative Clustering: We propose a method for segmentation of expository texts based on hierarchical agglomerative clustering. The method uses paragraphs as the basic segments for identifying hierarchical discourse structure in the text, applying lexical similarity between them as the proximity test. Linear segmentation can be induced from the identified structure through application of two simple rules. However the hierarchy can be used also for intelligent exploration of the text. The proposed segmentation algorithm is evaluated against an accepted linear segmentation method and shows comparable results.<|reference_end|> | arxiv | @article{yaari1997segmentation,
title={Segmentation of Expository Texts by Hierarchical Agglomerative
Clustering},
author={Yaakov Yaari (Bar Ilan University)},
journal={RANLP'97, Bulgaria},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9709015},
primaryClass={cmp-lg cs.CL}
} | yaari1997segmentation |
arxiv-669068 | cmp-lg/9710001 | Use of Weighted Finite State Transducers in Part of Speech Tagging | <|reference_start|>Use of Weighted Finite State Transducers in Part of Speech Tagging: This paper addresses issues in part of speech disambiguation using finite-state transducers and presents two main contributions to the field. One of them is the use of finite-state machines for part of speech tagging. Linguistic and statistical information is represented in terms of weights on transitions in weighted finite-state transducers. Another contribution is the successful combination of techniques -- linguistic and statistical -- for word disambiguation, compounded with the notion of word classes.<|reference_end|> | arxiv | @article{tzoukermann1997use,
title={Use of Weighted Finite State Transducers in Part of Speech Tagging},
author={Evelyne Tzoukermann (Bell Labs) and Dragomir R. Radev (Columbia
University)},
journal={arXiv preprint arXiv:cmp-lg/9710001},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710001},
primaryClass={cmp-lg cs.CL}
} | tzoukermann1997use |
arxiv-669069 | cmp-lg/9710002 | Tagging French Without Lexical Probabilities -- Combining Linguistic Knowledge And Statistical Learning | <|reference_start|>Tagging French Without Lexical Probabilities -- Combining Linguistic Knowledge And Statistical Learning: This paper explores morpho-syntactic ambiguities for French to develop a strategy for part-of-speech disambiguation that a) reflects the complexity of French as an inflected language, b) optimizes the estimation of probabilities, c) allows the user flexibility in choosing a tagset. The problem in extracting lexical probabilities from a limited training corpus is that the statistical model may not necessarily represent the use of a particular word in a particular context. In a highly morphologically inflected language, this argument is particularly serious since a word can be tagged with a large number of parts of speech. Due to the lack of sufficient training data, we argue against estimating lexical probabilities to disambiguate parts of speech in unrestricted texts. Instead, we use the strength of contextual probabilities along with a feature we call ``genotype'', a set of tags associated with a word. Using this knowledge, we have built a part-of-speech tagger that combines linguistic and statistical approaches: contextual information is disambiguated by linguistic rules and n-gram probabilities on parts of speech only are estimated in order to disambiguate the remaining ambiguous tags.<|reference_end|> | arxiv | @article{tzoukermann1997tagging,
title={Tagging French Without Lexical Probabilities -- Combining Linguistic
Knowledge And Statistical Learning},
author={Evelyne Tzoukermann (AT&T Bell Labs) and Dragomir R. Radev (Columbia
University) and William A. Gale (AT&T Bell Labs)},
journal={arXiv preprint arXiv:cmp-lg/9710002},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710002},
primaryClass={cmp-lg cs.CL}
} | tzoukermann1997tagging |
arxiv-669070 | cmp-lg/9710003 | Disambiguating with Controlled Disjunctions | <|reference_start|>Disambiguating with Controlled Disjunctions: In this paper, we propose a disambiguating technique called controlled disjunctions. This extension of the so-called named disjunctions relies on the relations existing between feature values (covariation, control, etc.). We show that controlled disjunctions can implement different kinds of ambiguities in a consistent and homogeneous way. We describe the integration of controlled disjunctions into an HPSG feature structure representation. Finally, we present a direct implementation by means of delayed evaluation and we develop an example within the functional programming paradigm.<|reference_end|> | arxiv | @article{blache1997disambiguating,
title={Disambiguating with Controlled Disjunctions},
author={Philippe Blache},
journal={Proceedings of IWPT'97},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710003},
primaryClass={cmp-lg cs.CL}
} | blache1997disambiguating |
arxiv-669071 | cmp-lg/9710004 | Parsing syllables: modeling OT computationally | <|reference_start|>Parsing syllables: modeling OT computationally: In this paper, I propose to implement syllabification in OT as a parser. I propose several innovations that result in a finite and small candidate set. The candidate set problem is handled with several moves: i) MAX and DEP violations are not hypothesized by the parser, ii) candidates are encoded locally, and iii) EVAL is applied constraint by constraint. The parser I propose is implemented in Prolog. It has a number of desirable consequences. First, it runs and thus provides an existence proof that syllabification can be implemented in OT. Second, constraints are implemented as finite-state transducers. Third, the parser makes several interesting claims about the phonological properties of so-called nonrecoverable insertions and deletions. Fourth, the implementation suggests some particular reformulations of some of the benchmark constraints in the OT arsenal, e.g. *COMPLEX, PARSE, ONSET, and NOCODA.<|reference_end|> | arxiv | @article{hammond1997parsing,
title={Parsing syllables: modeling OT computationally},
author={Michael Hammond (University of Arizona)},
journal={arXiv preprint arXiv:cmp-lg/9710004},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710004},
primaryClass={cmp-lg cs.CL}
} | hammond1997parsing |
arxiv-669072 | cmp-lg/9710005 | Attaching Multiple Prepositional Phrases: Generalized Backed-off Estimation | <|reference_start|>Attaching Multiple Prepositional Phrases: Generalized Backed-off Estimation: There has recently been considerable interest in the use of lexically-based statistical techniques to resolve prepositional phrase attachments. To our knowledge, however, these investigations have only considered the problem of attaching the first PP, i.e., in a [V NP PP] configuration. In this paper, we consider one technique which has been successfully applied to this problem, backed-off estimation, and demonstrate how it can be extended to deal with the problem of multiple PP attachment. Multiple PP attachment introduces two related problems: sparser data (since multiple PPs are naturally rarer), and greater syntactic ambiguity (more attachment configurations which must be distinguished). We present an algorithm which solves this problem through re-use of the relatively rich data obtained from first PP training, in resolving subsequent PP attachments.<|reference_end|> | arxiv | @article{merlo1997attaching,
title={Attaching Multiple Prepositional Phrases: Generalized Backed-off
Estimation},
author={Paola Merlo (U. of Pennsylvania and University of Geneva), Matthew
Crocker (University of Edinburgh) and Cathy Berthouzoz (University of Geneva)},
journal={arXiv preprint arXiv:cmp-lg/9710005},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710005},
primaryClass={cmp-lg cs.CL}
} | merlo1997attaching |
arxiv-669073 | cmp-lg/9710006 | Learning Features that Predict Cue Usage | <|reference_start|>Learning Features that Predict Cue Usage: Our goal is to identify the features that predict the occurrence and placement of discourse cues in tutorial explanations in order to aid in the automatic generation of explanations. Previous attempts to devise rules for text generation were based on intuition or small numbers of constructed examples. We apply a machine learning program, C4.5, to induce decision trees for cue occurrence and placement from a corpus of data coded for a variety of features previously thought to affect cue usage. Our experiments enable us to identify the features with most predictive power, and show that machine learning can be used to induce decision trees useful for text generation.<|reference_end|> | arxiv | @article{di eugenio1997learning,
title={Learning Features that Predict Cue Usage},
author={Barbara Di Eugenio (University of Pittsburgh), Johanna D. Moore
(University of Pittsburgh), Massimo Paolucci (University of Pittsburgh)},
journal={Proceedings of ACL/EACL97, Madrid, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710006},
primaryClass={cmp-lg cs.CL}
} | di eugenio1997learning |
arxiv-669074 | cmp-lg/9710007 | A Corpus-Based Investigation of Definite Description Use | <|reference_start|>A Corpus-Based Investigation of Definite Description Use: We present the results of a study of definite descriptions use in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. The most interesting result of this study from a corpus annotation perspective was the rather low agreement (K=0.63) that we obtained using versions of Hawkins' and Prince's classification schemes; better results (K=0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-new definites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites which did not seem to require a complete disambiguation.<|reference_end|> | arxiv | @article{poesio1997a,
title={A Corpus-Based Investigation of Definite Description Use},
author={Massimo Poesio and Renata Vieira (University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9710007},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710007},
primaryClass={cmp-lg cs.CL}
} | poesio1997a |
arxiv-669075 | cmp-lg/9710008 | Probabilistic Event Categorization | <|reference_start|>Probabilistic Event Categorization: This paper describes the automation of a new text categorization task. The categories assigned in this task are more syntactically, semantically, and contextually complex than those typically assigned by fully automatic systems that process unseen test data. Our system for assigning these categories is a probabilistic classifier, developed with a recent method for formulating a probabilistic model from a predefined set of potential features. This paper focuses on feature selection. It presents a number of fully automatic features. It identifies and evaluates various approaches to organizing collocational properties into features, and presents the results of experiments covarying type of organization and type of property. We find that one organization is not best for all kinds of properties, so this is an experimental parameter worth investigating in NLP systems. In addition, the results suggest a way to take advantage of properties that are low frequency but strongly indicative of a class. The problems of recognizing and organizing the various kinds of contextual information required to perform a linguistically complex categorization task have rarely been systematically investigated in NLP.<|reference_end|> | arxiv | @article{wiebe1997probabilistic,
title={Probabilistic Event Categorization},
author={Janyce Wiebe (New Mexico State University), Rebecca Bruce (Southern
Methodist University), and Lei Duan (Sony Corporation)},
journal={Recent Advances in Natural Language Processing (RANLP-97),
European Commission, DG XIII, Tzigov Chark, Bulgaria, September 1997, pp.
163--170.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9710008},
primaryClass={cmp-lg cs.CL}
} | wiebe1997probabilistic |
arxiv-669076 | cmp-lg/9711001 | Probabilistic Constraint Logic Programming | <|reference_start|>Probabilistic Constraint Logic Programming: This paper addresses two central problems for probabilistic processing models: parameter estimation from incomplete data and efficient retrieval of most probable analyses. These questions have been answered satisfactorily only for probabilistic regular and context-free models. We address these problems for a more expressive probabilistic constraint logic programming model. We present a log-linear probability model for probabilistic constraint logic programming. On top of this model we define an algorithm to estimate the parameters and to select the properties of log-linear models from incomplete data. This algorithm is an extension of the improved iterative scaling algorithm of Della-Pietra, Della-Pietra, and Lafferty (1995). Our algorithm applies to log-linear models in general and is accompanied with suitable approximation methods when applied to large data spaces. Furthermore, we present an approach for searching for most probable analyses of the probabilistic constraint logic programming model. This method can be applied to the ambiguity resolution problem in natural language processing applications.<|reference_end|> | arxiv | @article{riezler1997probabilistic,
title={Probabilistic Constraint Logic Programming},
author={Stefan Riezler (University of Tuebingen)},
journal={arXiv preprint arXiv:cmp-lg/9711001},
year={1997},
number={Arbeitspapiere des SFB 340, Bericht Nr. 117, Oktober 1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711001},
primaryClass={cmp-lg cs.CL}
} | riezler1997probabilistic |
arxiv-669077 | cmp-lg/9711002 | Approximating Context-Free Grammars with a Finite-State Calculus | <|reference_start|>Approximating Context-Free Grammars with a Finite-State Calculus: Although adequate models of human language for syntactic analysis and semantic interpretation are of at least context-free complexity, for applications such as speech processing in which speed is important finite-state models are often preferred. These requirements may be reconciled by using the more complex grammar to automatically derive a finite-state approximation which can then be used as a filter to guide speech recognition or to reject many hypotheses at an early stage of processing. A method is presented here for calculating such finite-state approximations from context-free grammars. It is essentially different from the algorithm introduced by Pereira and Wright (1991; 1996), is faster in some cases, and has the advantage of being open-ended and adaptable.<|reference_end|> | arxiv | @article{grimley-evans1997approximating,
title={Approximating Context-Free Grammars with a Finite-State Calculus},
author={Edmund Grimley-Evans},
journal={Proceedings of ACL-EACL 97, Madrid, pp 452-459, 1997.},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711002},
primaryClass={cmp-lg cs.CL}
} | grimley-evans1997approximating |
arxiv-669078 | cmp-lg/9711003 | Probabilistic Parsing Using Left Corner Language Models | <|reference_start|>Probabilistic Parsing Using Left Corner Language Models: We introduce a novel parser based on a probabilistic version of a left-corner parser. The left-corner strategy is attractive because rule probabilities can be conditioned on both top-down goals and bottom-up derivations. We develop the underlying theory and explain how a grammar can be induced from analyzed data. We show that the left-corner approach provides an advantage over simple top-down probabilistic context-free grammars in parsing the Wall Street Journal using a grammar induced from the Penn Treebank. We also conclude that the Penn Treebank provides a fairly weak testbed due to the flatness of its bracketings and to the obvious overgeneration and undergeneration of its induced grammar.<|reference_end|> | arxiv | @article{manning1997probabilistic,
title={Probabilistic Parsing Using Left Corner Language Models},
author={Christopher D. Manning (University of Sydney) and Bob Carpenter
(Lucent Technologies Bell Labs)},
journal={Proceedings of the Fifth International Workshop on Parsing
Technologies, MIT, Boston MA, 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711003},
primaryClass={cmp-lg cs.CL}
} | manning1997probabilistic |
arxiv-669079 | cmp-lg/9711004 | Variation and Synthetic Speech | <|reference_start|>Variation and Synthetic Speech: We describe the approach to linguistic variation taken by the Motorola speech synthesizer. A pan-dialectal pronunciation dictionary is described, which serves as the training data for a neural network based letter-to-sound converter. Subsequent to dictionary retrieval or letter-to-sound generation, pronunciations are submitted to a neural network based postlexical module. The postlexical module has been trained on aligned dictionary pronunciations and hand-labeled narrow phonetic transcriptions. This architecture permits the learning of individual postlexical variation, and can be retrained for each speaker whose voice is being modeled for synthesis. Learning variation in this way can result in greater naturalness for the synthetic speech that is produced by the system.<|reference_end|> | arxiv | @article{miller1997variation,
title={Variation and Synthetic Speech},
author={Corey Miller, Orhan Karaali, and Noel Massey},
journal={arXiv preprint arXiv:cmp-lg/9711004},
year={1997},
number={Motorola-SSML-1},
archivePrefix={arXiv},
eprint={cmp-lg/9711004},
primaryClass={cmp-lg cs.CL}
} | miller1997variation |
arxiv-669080 | cmp-lg/9711005 | Some apparently disjoint aims and requirements for grammar development environments: the case of natural language generation | <|reference_start|>Some apparently disjoint aims and requirements for grammar development environments: the case of natural language generation: Grammar development environments (GDE's) for analysis and for generation have not yet come together. Despite the fact that analysis-oriented GDE's (such as ALEP) may include some possibility of sentence generation, the development techniques and kinds of resources suggested are apparently not those required for practical, large-scale natural language generation work. Indeed, there is no use of `standard' (i.e., analysis-oriented) GDE's in current projects/applications targeting the generation of fluent, coherent texts. This unsatisfactory situation requires some analysis and explanation, which this paper attempts using as an example an extensive GDE for generation. The support provided for distributed large-scale grammar development, multilinguality, and resource maintenance is discussed and contrasted with analysis-oriented approaches.<|reference_end|> | arxiv | @article{bateman1997some,
title={Some apparently disjoint aims and requirements for grammar development
environments: the case of natural language generation},
author={John A. Bateman (Language and Communication Research Centre, Dept. of
English Studies, University of Stirling, Scotland)},
journal={arXiv preprint arXiv:cmp-lg/9711005},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711005},
primaryClass={cmp-lg cs.CL}
} | bateman1997some |
arxiv-669081 | cmp-lg/9711006 | Contextual Information and Specific Language Models for Spoken Language Understanding | <|reference_start|>Contextual Information and Specific Language Models for Spoken Language Understanding: In this paper we explain how contextual expectations are generated and used in the task-oriented spoken language understanding system Dialogos. The hard task of recognizing spontaneous speech on the telephone may greatly benefit from the use of specific language models during the recognition of callers' utterances. By 'specific language models' we mean a set of language models that are trained on contextually appropriate data, and that are used during different states of the dialogue on the basis of the information sent to the acoustic level by the dialogue management module. In this paper we describe how the specific language models are obtained on the basis of contextual information. The experimental results we report show that recognition and understanding performance are improved thanks to the use of specific language models.<|reference_end|> | arxiv | @article{baggia1997contextual,
title={Contextual Information and Specific Language Models for Spoken Language
Understanding},
author={Paolo Baggia, Morena Danieli, Elisabetta Gerbino, Loreta M. Moisa and
Cosmin Popovici (CSELT - Turin, Italy)},
journal={Proceedings of SPECOM'97, Cluj-Napoca, Romania, pp. 51-56},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711006},
primaryClass={cmp-lg cs.CL}
} | baggia1997contextual |
arxiv-669082 | cmp-lg/9711007 | Language Modelling For Task-Oriented Domains | <|reference_start|>Language Modelling For Task-Oriented Domains: This paper focuses on language modelling for task-oriented domains and presents an accurate analysis of the utterances acquired by the Dialogos spoken dialogue system. Dialogos allows access to the Italian Railways timetable by using the telephone over the public network. The language modelling aspects of specificity and behaviour on rare events are studied. A technique for making a language model more robust, based on sentences generated by grammars, is presented. Experimental results show the benefit of the proposed technique. The performance gain of language models created using grammars over the usual ones is higher when the amount of training material is limited. Therefore this technique can give an advantage especially for the development of language models in a new domain.<|reference_end|> | arxiv | @article{popovici1997language,
title={Language Modelling For Task-Oriented Domains},
author={Cosmin Popovici (ICI - Bucuresti, Romania), Paolo Baggia (CSELT -
Turin, Italy)},
journal={Proceedings of EUROSPEECH'97, Rhodes, Greece, vol. 3, pp.
1459-1462},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711007},
primaryClass={cmp-lg cs.CL}
} | popovici1997language |
arxiv-669083 | cmp-lg/9711008 | On the use of expectations for detecting and repairing human-machine miscommunication | <|reference_start|>On the use of expectations for detecting and repairing human-machine miscommunication: In this paper I describe how miscommunication problems are dealt with in the spoken language system DIALOGOS. The dialogue module of the system exploits dialogic expectations in a twofold way: to model what future user utterances might be about (predictions), and to account for how the user's next utterance may be related to previous ones in the ongoing interaction (pragmatic-based expectations). The analysis starts from the hypothesis that the occurrence of miscommunication is concomitant with two pragmatic phenomena: the deviation of the user from the expected behaviour and the generation of a conversational implicature. A preliminary evaluation of a large number of interactions between subjects and DIALOGOS shows that the system performance is enhanced by the use of both predictions and pragmatic-based expectations.<|reference_end|> | arxiv | @article{danieli1997on,
title={On the use of expectations for detecting and repairing human-machine
miscommunication},
author={Morena Danieli (CSELT - Turin, Italy)},
journal={Proceedings of AAAI-96 Workshop on Detecting, Preventing, and
Repairing Human-Machine Miscommunications, Portland, OR, pp. 87-93},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711008},
primaryClass={cmp-lg cs.CL}
} | danieli1997on |
arxiv-669084 | cmp-lg/9711009 | Towards an Improved Performance Measure for Language Models | <|reference_start|>Towards an Improved Performance Measure for Language Models: In this paper a first attempt at deriving an improved performance measure for language models, the probability ratio measure (PRM), is described. In a proof of concept experiment, it is shown that PRM correlates better with recognition accuracy and can lead to better recognition results when used as the optimisation criterion of a clustering algorithm. In spite of the approximations and limitations of this preliminary work, the results are very encouraging and should justify more work along the same lines.<|reference_end|> | arxiv | @article{ueberla1997towards,
title={Towards an Improved Performance Measure for Language Models},
author={Joerg P. Ueberla (Speech Machines - DRA Malvern)},
journal={arXiv preprint arXiv:cmp-lg/9711009},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711009},
primaryClass={cmp-lg cs.CL}
} | ueberla1997towards |
arxiv-669085 | cmp-lg/9711010 | Application-driven automatic subgrammar extraction | <|reference_start|>Application-driven automatic subgrammar extraction: The space and run-time requirements of broad coverage grammars appear for many applications unreasonably large in relation to the relative simplicity of the task at hand. On the other hand, handcrafted development of application-dependent grammars is in danger of duplicating work which is then difficult to re-use in other contexts of application. To overcome this problem, we present in this paper a procedure for the automatic extraction of application-tuned consistent subgrammars from proved large-scale generation grammars. The procedure has been implemented for large-scale systemic grammars and builds on the formal equivalence between systemic grammars and typed unification based grammars. Its evaluation for the generation of encyclopedia entries is described, and directions of future development, applicability, and extensions are discussed.<|reference_end|> | arxiv | @article{henschel1997application-driven,
title={Application-driven automatic subgrammar extraction},
author={Renate Henschel (Centre for Cognitive Science, University of
Edinburgh) and John A. Bateman (Language and Communication Research Centre,
Dept. of English Studies, University of Stirling)},
journal={arXiv preprint arXiv:cmp-lg/9711010},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711010},
primaryClass={cmp-lg cs.CL}
} | henschel1997application-driven |
arxiv-669086 | cmp-lg/9711011 | The effect of alternative tree representations on tree bank grammars | <|reference_start|>The effect of alternative tree representations on tree bank grammars: The performance of PCFGs estimated from tree banks is sensitive to the particular way in which linguistic constructions are represented as trees in the tree bank. This paper presents a theoretical analysis of the effect of different tree representations for PP attachment on PCFG models, and introduces a new methodology for empirically examining such effects using tree transformations. It shows that one transformation, which copies the label of a parent node onto the labels of its children, can improve the performance of a PCFG model in terms of labelled precision and recall on held out data from 73% (precision) and 69% (recall) to 80% and 79% respectively. It also points out that if only maximum likelihood parses are of interest then many productions can be ignored, since they are subsumed by combinations of other productions in the grammar. In the Penn II tree bank grammar, almost 9% of productions are subsumed in this way.<|reference_end|> | arxiv | @article{johnson1997the,
title={The effect of alternative tree representations on tree bank grammars},
author={Mark Johnson (Brown University)},
journal={arXiv preprint arXiv:cmp-lg/9711011},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711011},
primaryClass={cmp-lg cs.CL}
} | johnson1997the |
arxiv-669087 | cmp-lg/9711012 | Proof Nets and the Complexity of Processing Center-Embedded Constructions | <|reference_start|>Proof Nets and the Complexity of Processing Center-Embedded Constructions: This paper shows how proof nets can be used to formalize the notion of ``incomplete dependency'' used in psycholinguistic theories of the unacceptability of center-embedded constructions. Such theories of human language processing can usually be restated in terms of geometrical constraints on proof nets. The paper ends with a discussion of the relationship between these constraints and incremental semantic interpretation.<|reference_end|> | arxiv | @article{johnson1997proof,
title={Proof Nets and the Complexity of Processing Center-Embedded
Constructions},
author={Mark Johnson (Brown University)},
journal={arXiv preprint arXiv:cmp-lg/9711012},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711012},
primaryClass={cmp-lg cs.CL}
} | johnson1997proof |
arxiv-669088 | cmp-lg/9711013 | Features as Resources in R-LFG | <|reference_start|>Features as Resources in R-LFG: This paper introduces a non-unification-based version of LFG called R-LFG (Resource-based Lexical Functional Grammar), which combines elements from both LFG and Linear Logic. The paper argues that a resource sensitive account provides a simpler treatment of many linguistic uses of non-monotonic devices in LFG, such as existential constraints and constraint equations.<|reference_end|> | arxiv | @article{johnson1997features,
title={Features as Resources in R-LFG},
author={Mark Johnson (Brown University)},
journal={arXiv preprint arXiv:cmp-lg/9711013},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711013},
primaryClass={cmp-lg cs.CL}
} | johnson1997features |
arxiv-669089 | cmp-lg/9711014 | Type-driven semantic interpretation and feature dependencies in R-LFG | <|reference_start|>Type-driven semantic interpretation and feature dependencies in R-LFG: Once one has enriched LFG's formal machinery with the linear logic mechanisms needed for semantic interpretation as proposed by Dalrymple et al., it is natural to ask whether these make any existing components of LFG redundant. As Dalrymple and her colleagues note, LFG's f-structure completeness and coherence constraints fall out as a by-product of the linear logic machinery they propose for semantic interpretation, thus making those f-structure mechanisms redundant. Given that linear logic machinery or something like it is independently needed for semantic interpretation, it seems reasonable to explore the extent to which it is capable of handling feature structure constraints as well. R-LFG represents the extreme position that all linguistically required feature structure dependencies can be captured by the resource-accounting machinery of a linear or similar logic independently needed for semantic interpretation, making LFG's unification machinery redundant. The goal is to show that LFG linguistic analyses can be expressed as clearly and perspicuously using the smaller set of mechanisms of R-LFG as they can using the much larger set of unification-based mechanisms in LFG: if this is the case then we will have shown that positing these extra f-structure mechanisms is not linguistically warranted.<|reference_end|> | arxiv | @article{johnson1997type-driven,
title={Type-driven semantic interpretation and feature dependencies in R-LFG},
author={Mark Johnson (Brown University)},
journal={arXiv preprint arXiv:cmp-lg/9711014},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9711014},
primaryClass={cmp-lg cs.CL}
} | johnson1997type-driven |
arxiv-669090 | cmp-lg/9712001 | Applying Explanation-based Learning to Control and Speeding-up Natural Language Generation | <|reference_start|>Applying Explanation-based Learning to Control and Speeding-up Natural Language Generation: This paper presents a method for the automatic extraction of subgrammars to control and speed up natural language generation (NLG). The method is based on explanation-based learning (EBL). The main advantage of the proposed new method for NLG is that the complexity of the grammatical decision-making process during NLG can be vastly reduced, because the EBL method supports the adaptation of an NLG system to a particular use of a language.<|reference_end|> | arxiv | @article{neumann1997applying,
title={Applying Explanation-based Learning to Control and Speeding-up Natural
Language Generation},
author={Guenter Neumann},
journal={arXiv preprint arXiv:cmp-lg/9712001},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712001},
primaryClass={cmp-lg cs.CL}
} | neumann1997applying |
arxiv-669091 | cmp-lg/9712002 | Machine Learning of User Profiles: Representational Issues | <|reference_start|>Machine Learning of User Profiles: Representational Issues: As more information becomes available electronically, tools for finding information of interest to users becomes increasingly important. The goal of the research described here is to build a system for generating comprehensible user profiles that accurately capture user interest with minimum user interaction. The research described here focuses on the importance of a suitable generalization hierarchy and representation for learning profiles which are predictively accurate and comprehensible. In our experiments we evaluated both traditional features based on weighted term vectors as well as subject features corresponding to categories which could be drawn from a thesaurus. Our experiments, conducted in the context of a content-based profiling system for on-line newspapers on the World Wide Web (the IDD News Browser), demonstrate the importance of a generalization hierarchy and the promise of combining natural language processing techniques with machine learning (ML) to address an information retrieval (IR) problem.<|reference_end|> | arxiv | @article{bloedorn1997machine,
title={Machine Learning of User Profiles: Representational Issues},
author={Eric Bloedorn (MITRE Corporation and George Mason University),
Inderjeet Mani (MITRE Corporation) and T. Richard MacMillan (MITRE
Corporation)},
journal={arXiv preprint arXiv:cmp-lg/9712002},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712002},
primaryClass={cmp-lg cs.CL cs.LG}
} | bloedorn1997machine |
arxiv-669092 | cmp-lg/9712003 | Context as a Spurious Concept | <|reference_start|>Context as a Spurious Concept: I take issue with AI formalizations of context, primarily the formalization by McCarthy and Buvac, that regard context as an undefined primitive whose formalization can be the same in many different kinds of AI tasks. In particular, any theory of context in natural language must take the special nature of natural language into account and cannot regard context simply as an undefined primitive. I show that there is no such thing as a coherent theory of context simpliciter -- context pure and simple -- and that context in natural language is not the same kind of thing as context in KR. In natural language, context is constructed by the speaker and the interpreter, and both have considerable discretion in so doing. Therefore, a formalization based on pre-defined contexts and pre-defined `lifting axioms' cannot account for how context is used in real-world language.<|reference_end|> | arxiv | @article{hirst1997context,
title={Context as a Spurious Concept},
author={Graeme Hirst (University of Toronto)},
journal={arXiv preprint arXiv:cmp-lg/9712003},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712003},
primaryClass={cmp-lg cs.CL}
} | hirst1997context |
arxiv-669093 | cmp-lg/9712004 | Multi-document Summarization by Graph Search and Matching | <|reference_start|>Multi-document Summarization by Graph Search and Matching: We describe a new method for summarizing similarities and differences in a pair of related documents using a graph representation for text. Concepts denoted by words, phrases, and proper names in the document are represented positionally as nodes in the graph along with edges corresponding to semantic relations between items. Given a perspective in terms of which the pair of documents is to be summarized, the algorithm first uses a spreading activation technique to discover, in each document, nodes semantically related to the topic. The activated graphs of each document are then matched to yield a graph corresponding to similarities and differences between the pair, which is rendered in natural language. An evaluation of these techniques has been carried out.<|reference_end|> | arxiv | @article{mani1997multi-document,
title={Multi-document Summarization by Graph Search and Matching},
author={Inderjeet Mani (MITRE Corporation) and Eric Bloedorn (MITRE Corporation)},
journal={arXiv preprint arXiv:cmp-lg/9712004},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712004},
primaryClass={cmp-lg cs.CL}
} | mani1997multi-document |
arxiv-669094 | cmp-lg/9712005 | Topic Graph Generation for Query Navigation: Use of Frequency Classes for Topic Extraction | <|reference_start|>Topic Graph Generation for Query Navigation: Use of Frequency Classes for Topic Extraction: To make an interactive guidance mechanism for document retrieval systems, we developed a user interface which presents users with a visualized map of topics at each stage of the retrieval process. Topic words are automatically extracted by frequency analysis, and the strength of the relationships between topic words is measured by their co-occurrence. A major factor affecting a user's impression of a given topic word graph is the balance between common topic words and specific topic words. By using frequency classes for topic word extraction, we made it possible to select a well-balanced set of topic words and to adjust the balance of common and specific topic words.<|reference_end|> | arxiv | @article{niwa1997topic,
title={Topic Graph Generation for Query Navigation: Use of Frequency Classes
for Topic Extraction},
author={Yoshiki Niwa, Shingo Nishioka, Makoto Iwayama, Akihiko Takano
(Advanced Research Laboratory, Hitachi, Ltd.), and Yoshihiko Nitta (Dept. of
Economics, Nihon University)},
journal={Proceedings of NLPRS'97, Natural Language Processing Pacific Rim
Symposium '97, pages 95-100, Phuket, Thailand, Dec. 1997},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712005},
primaryClass={cmp-lg cs.CL}
} | niwa1997topic |
arxiv-669095 | cmp-lg/9712006 | "I don't believe in word senses" | <|reference_start|>"I don't believe in word senses": Word sense disambiguation assumes word senses. Within the lexicography and linguistics literature, they are known to be very slippery entities. The paper looks at problems with existing accounts of `word sense' and describes the various kinds of ways in which a word's meaning can deviate from its core meaning. An analysis is presented in which word senses are abstractions from clusters of corpus citations, in accordance with current lexicographic practice. The corpus citations, not the word senses, are the basic objects in the ontology. The corpus citations will be clustered into senses according to the purposes of whoever or whatever does the clustering. In the absence of such purposes, word senses do not exist. Word sense disambiguation also needs a set of word senses to disambiguate between. In most recent work, the set has been taken from a general-purpose lexical resource, with the assumption that the lexical resource describes the word senses of English/French/..., between which NLP applications will need to disambiguate. The implication of the paper is, by contrast, that word senses exist only relative to a task.<|reference_end|> | arxiv | @article{kilgarriff1997"i,
title={"I don't believe in word senses"},
author={Adam Kilgarriff (ITRI, University of Brighton)},
journal={arXiv preprint arXiv:cmp-lg/9712006},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712006},
primaryClass={cmp-lg cs.CL}
} | kilgarriff1997"i |
arxiv-669096 | cmp-lg/9712007 | Foreground and Background Lexicons and Word Sense Disambiguation for Information Extraction | <|reference_start|>Foreground and Background Lexicons and Word Sense Disambiguation for Information Extraction: Lexicon acquisition from machine-readable dictionaries and corpora is currently a dynamic field of research, yet it is often not clear how lexical information so acquired can be used, or how it relates to structured meaning representations. In this paper I look at this issue in relation to Information Extraction (hereafter IE), and one subtask for which both lexical and general knowledge are required, Word Sense Disambiguation (WSD). The analysis is based on the widely-used, but little-discussed distinction between an IE system's foreground lexicon, containing the domain's key terms which map onto the database fields of the output formalism, and the background lexicon, containing the remainder of the vocabulary. For the foreground lexicon, human lexicography is required. For the background lexicon, automatic acquisition is appropriate. For the foreground lexicon, WSD will occur as a by-product of finding a coherent semantic interpretation of the input. WSD techniques as discussed in recent literature are suited only to the background lexicon. Once the foreground/background distinction is developed, there is a match between what is possible, given the state of the art in WSD, and what is required, for high-quality IE.<|reference_end|> | arxiv | @article{kilgarriff1997foreground,
title={Foreground and Background Lexicons and Word Sense Disambiguation for
Information Extraction},
author={Adam Kilgarriff (ITRI, University of Brighton)},
journal={Proc. International Workshop on Lexically Driven Information
Extraction. Frascati, Italy. July 1997. Pp 51--62.},
year={1997},
number={ITRI-97-04},
archivePrefix={arXiv},
eprint={cmp-lg/9712007},
primaryClass={cmp-lg cs.CL}
} | kilgarriff1997foreground |
arxiv-669097 | cmp-lg/9712008 | What is word sense disambiguation good for? | <|reference_start|>What is word sense disambiguation good for?: Word sense disambiguation has developed as a sub-area of natural language processing, as if, like parsing, it was a well-defined task which was a pre-requisite to a wide range of language-understanding applications. First, I review earlier work which shows that a set of senses for a word is only ever defined relative to a particular human purpose, and that a view of word senses as part of the linguistic furniture lacks theoretical underpinnings. Then, I investigate whether and how word sense ambiguity is in fact a problem for different varieties of NLP application.<|reference_end|> | arxiv | @article{kilgarriff1997what,
title={What is word sense disambiguation good for?},
author={Adam Kilgarriff (ITRI, University of Brighton)},
journal={Proc. Natural Language Processing Pacific Rim Symposium. Phuket,
Thailand. December 1997. Pp 209--214.},
year={1997},
number={ITRI-97-08},
archivePrefix={arXiv},
eprint={cmp-lg/9712008},
primaryClass={cmp-lg cs.CL}
} | kilgarriff1997what |
arxiv-669098 | cmp-lg/9712009 | Speech Repairs, Intonational Boundaries and Discourse Markers: Modeling Speakers' Utterances in Spoken Dialog | <|reference_start|>Speech Repairs, Intonational Boundaries and Discourse Markers: Modeling Speakers' Utterances in Spoken Dialog: In this thesis, we present a statistical language model for resolving speech repairs, intonational boundaries and discourse markers. Rather than finding the best word interpretation for an acoustic signal, we redefine the speech recognition problem so that it also identifies the POS tags, discourse markers, speech repairs and intonational phrase endings (a major cue in determining utterance units). Adding these extra elements to the speech recognition problem actually allows it to better predict the words involved, since we are able to make use of the predictions of boundary tones, discourse markers and speech repairs to better account for what word will occur next. Furthermore, we can take advantage of acoustic information, such as silence information, which tends to co-occur with speech repairs and intonational phrase endings but which current language models can only regard as noise in the acoustic signal. The output of this language model is a much fuller account of the speaker's turn, with part-of-speech assigned to each word, intonational phrase endings and discourse markers identified, and speech repairs detected and corrected. In fact, the identification of the intonational phrase endings, discourse markers, and resolution of the speech repairs allows the speech recognizer to model the speaker's utterances, rather than simply the words involved, and thus it can return a more meaningful analysis of the speaker's turn for later processing.<|reference_end|> | arxiv | @article{heeman1997speech,
title={Speech Repairs, Intonational Boundaries and Discourse Markers: Modeling
Speakers' Utterances in Spoken Dialog},
author={Peter A. Heeman (University of Rochester)},
journal={arXiv preprint arXiv:cmp-lg/9712009},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712009},
primaryClass={cmp-lg cs.CL}
} | heeman1997speech |
arxiv-669099 | cmp-lg/9712010 | Orthographic Structuring of Human Speech and Texts: Linguistic Application of Recurrence Quantification Analysis | <|reference_start|>Orthographic Structuring of Human Speech and Texts: Linguistic Application of Recurrence Quantification Analysis: A methodology based upon recurrence quantification analysis is proposed for the study of orthographic structure of written texts. Five different orthographic data sets (20th century Italian poems, 20th century American poems, contemporary Swedish poems with their corresponding Italian translations, Italian speech samples, and American speech samples) were subjected to recurrence quantification analysis, a procedure which has been found to be diagnostically useful in the quantitative assessment of ordered series in fields such as physics, molecular dynamics, physiology, and general signal processing. Recurrence quantification was developed from recurrence plots as applied to the analysis of nonlinear, complex systems in the physical sciences, and is based on the computation of a distance matrix of the elements of an ordered series (in this case the letters constituting selected speech and poetic texts). From a strictly mathematical view, the results show the possibility of demonstrating invariance between different language exemplars despite the apparently low level of coding (orthography). Comparison with the actual texts confirms the ability of the method to reveal recurrent structures, and their complexity. Using poems as a reference standard for judging speech complexity, the technique exhibits language independence, order dependence and freedom from pure statistical characteristics of studied sequences, as well as consistency with easily identifiable texts. Such studies may provide phenomenological markers of hidden structure as coded by the purely orthographic level.<|reference_end|> | arxiv | @article{orsucci1997orthographic,
title={Orthographic Structuring of Human Speech and Texts: Linguistic
Application of Recurrence Quantification Analysis},
author={F. Orsucci and K. Walter and A. Giuliani and C. L. Webber, Jr. and J. P. Zbilut},
journal={arXiv preprint arXiv:cmp-lg/9712010},
year={1997},
archivePrefix={arXiv},
eprint={cmp-lg/9712010},
primaryClass={cmp-lg cs.CL}
} | orsucci1997orthographic |
arxiv-669100 | cmp-lg/9801001 | Hierarchical Non-Emitting Markov Models | <|reference_start|>Hierarchical Non-Emitting Markov Models: We describe a simple variant of the interpolated Markov model with non-emitting state transitions and prove that it is strictly more powerful than any Markov model. More importantly, the non-emitting model outperforms the classic interpolated model on the natural language texts under a wide range of experimental conditions, with only a modest increase in computational requirements. The non-emitting model is also much less prone to overfitting. Keywords: Markov model, interpolated Markov model, hidden Markov model, mixture modeling, non-emitting state transitions, state-conditional interpolation, statistical language model, discrete time series, Brown corpus, Wall Street Journal.<|reference_end|> | arxiv | @article{ristad1998hierarchical,
title={Hierarchical Non-Emitting Markov Models},
author={Eric Sven Ristad and Robert G. Thomas},
journal={arXiv preprint arXiv:cmp-lg/9801001},
year={1998},
number={CS-TR-544-97},
archivePrefix={arXiv},
eprint={cmp-lg/9801001},
primaryClass={cmp-lg cs.CL}
} | ristad1998hierarchical |