corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-668501 | cmp-lg/9412001 | Dependency Grammar and the Parsing of Chinese Sentences | <|reference_start|>Dependency Grammar and the Parsing of Chinese Sentences: Dependency Grammar has been used by linguists as the basis of the syntactic components of their grammar formalisms. It has also been used in natural language parsing. In China, attempts have been made to use this grammar formalism to parse Chinese sentences using corpus-based techniques. This paper reviews the properties of Dependency Grammar as embodied in four axioms for the well-formedness conditions for dependency structures. It is shown that allowing multiple governors as done by some followers of this formalism is unnecessary. The practice of augmenting Dependency Grammar with functional labels is also discussed in the light of building functional structures when the sentence is parsed. This will also facilitate semantic interpretation.<|reference_end|> | arxiv | @article{lai1994dependency,
title={Dependency Grammar and the Parsing of Chinese Sentences},
author={Bong Yeung Tom Lai (City University of Hong Kong & Tsinghua
University, Beijing) and Changning Huang (Tsinghua University, Beijing)},
journal={arXiv preprint arXiv:cmp-lg/9412001},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9412001},
primaryClass={cmp-lg cs.CL}
} | lai1994dependency |
arxiv-668502 | cmp-lg/9412002 | N-Gram Cluster Identification During Empirical Knowledge Representation Generation | <|reference_start|>N-Gram Cluster Identification During Empirical Knowledge Representation Generation: This paper presents an overview of current research concerning knowledge extraction from technical texts. In particular, the use of empirical techniques during the identification and generation of a semantic representation is considered. A key step is the discovery of useful n-grams and correlations between clusters of these n-grams.<|reference_end|> | arxiv | @article{collier1994n-gram,
title={N-Gram Cluster Identification During Empirical Knowledge Representation
Generation},
author={Robin Collier (Department of Computer Science, University of
Sheffield, England)},
journal={arXiv preprint arXiv:cmp-lg/9412002},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9412002},
primaryClass={cmp-lg cs.CL}
} | collier1994n-gram |
arxiv-668503 | cmp-lg/9412003 | An Extended Clustering Algorithm for Statistical Language Models | <|reference_start|>An Extended Clustering Algorithm for Statistical Language Models: Statistical language models frequently suffer from a lack of training data. This problem can be alleviated by clustering, because it reduces the number of free parameters that need to be trained. However, clustered models have the following drawback: if there is ``enough'' data to train an unclustered model, then the clustered variant may perform worse. On currently used language modeling corpora, e.g. the Wall Street Journal corpus, how do the performances of a clustered and an unclustered model compare? While trying to address this question, we develop the following two ideas. First, to get a clustering algorithm with potentially high performance, an existing algorithm is extended to deal with higher order N-grams. Second, to make it possible to cluster large amounts of training data more efficiently, a heuristic to speed up the algorithm is presented. The resulting clustering algorithm can be used to cluster trigrams on the Wall Street Journal corpus and the language models it produces can compete with existing back-off models. Especially when there is only little training data available, the clustered models clearly outperform the back-off models.<|reference_end|> | arxiv | @article{ueberla1994an,
title={An Extended Clustering Algorithm for Statistical Language Models},
author={Joerg P. Ueberla (Forum Technology - DRA Malvern)},
journal={arXiv preprint arXiv:cmp-lg/9412003},
year={1994},
number={DRA/CIS(CSE1)/RN94/13},
archivePrefix={arXiv},
eprint={cmp-lg/9412003},
primaryClass={cmp-lg cs.CL}
} | ueberla1994an |
arxiv-668504 | cmp-lg/9412004 | Knowledge Representation for Lexical Semantics: Is Standard First Order Logic Enough? | <|reference_start|>Knowledge Representation for Lexical Semantics: Is Standard First Order Logic Enough?: Natural language understanding applications such as interactive planning and face-to-face translation require extensive inferencing. Many of these inferences are based on the meaning of particular open class words. Providing a representation that can support such lexically-based inferences is a primary concern of lexical semantics. The representation language of first order logic has well-understood semantics and a multitude of inferencing systems have been implemented for it. Thus it is a prime candidate to serve as a lexical semantics representation. However, we argue that FOL, although a good starting point, needs to be extended before it can efficiently and concisely support all the lexically-based inferences needed.<|reference_end|> | arxiv | @article{light1994knowledge,
title={Knowledge Representation for Lexical Semantics: Is Standard First Order
Logic Enough?},
author={Marc Light and Lenhart Schubert (University of Rochester)},
journal={arXiv preprint arXiv:cmp-lg/9412004},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9412004},
primaryClass={cmp-lg cs.CL}
} | light1994knowledge |
arxiv-668505 | cmp-lg/9412005 | Segmenting speech without a lexicon: The roles of phonotactics and speech source | <|reference_start|>Segmenting speech without a lexicon: The roles of phonotactics and speech source: Infants face the difficult problem of segmenting continuous speech into words without the benefit of a fully developed lexicon. Several sources of information in speech might help infants solve this problem, including prosody, semantic correlations and phonotactics. Research to date has focused on determining to which of these sources infants might be sensitive, but little work has been done to determine the potential usefulness of each source. The computer simulations reported here are a first attempt to measure the usefulness of distributional and phonotactic information in segmenting phoneme sequences. The algorithms hypothesize different segmentations of the input into words and select the best hypothesis according to the Minimum Description Length principle. Our results indicate that while there is some useful information in both phoneme distributions and phonotactic rules, the combination of both sources is most useful.<|reference_end|> | arxiv | @article{cartwright1994segmenting,
title={Segmenting speech without a lexicon: The roles of phonotactics and
speech source},
author={Timothy Andrew Cartwright and Michael R. Brent},
journal={arXiv preprint arXiv:cmp-lg/9412005},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9412005},
primaryClass={cmp-lg cs.CL}
} | cartwright1994segmenting |
arxiv-668506 | cmp-lg/9412006 | Robust stochastic parsing using the inside-outside algorithm | <|reference_start|>Robust stochastic parsing using the inside-outside algorithm: The paper describes a parser of sequences of (English) part-of-speech labels which utilises a probabilistic grammar trained using the inside-outside algorithm. The initial (meta)grammar is defined by a linguist and further rules compatible with metagrammatical constraints are automatically generated. During training, rules with very low probability are rejected yielding a wide-coverage parser capable of ranking alternative analyses. A series of corpus-based experiments describe the parser's performance.<|reference_end|> | arxiv | @article{briscoe1994robust,
title={Robust stochastic parsing using the inside-outside algorithm},
author={Ted Briscoe and Nick Waegner (University of Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9412006},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9412006},
primaryClass={cmp-lg cs.CL}
} | briscoe1994robust |
arxiv-668507 | cmp-lg/9412007 | Coupling Phonology and Phonetics in a Constraint-Based Gestural Model | <|reference_start|>Coupling Phonology and Phonetics in a Constraint-Based Gestural Model: An implemented approach which couples a constraint-based phonology component with an articulatory speech synthesizer is proposed. Articulatory gestures ensure a tight connection between both components, as they comprise both physical-phonetic and phonological aspects. The phonological modelling of e.g. syllabification and phonological processes such as German final devoicing is expressed in the constraint logic programming language CUF. Extending CUF by arithmetic constraints allows the simultaneous description of both phonology and phonetics. Thus declarative lexicalist theories of grammar such as HPSG may be enriched up to the level of detailed phonetic realisation. Initial acoustic demonstrations show that our approach is in principle capable of synthesizing full utterances in a linguistically motivated fashion.<|reference_end|> | arxiv | @article{walther1994coupling,
title={Coupling Phonology and Phonetics in a Constraint-Based Gestural Model},
author={Markus Walther (University of Duesseldorf) and Bernd J. Kroeger
(University of Cologne, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9412007},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9412007},
primaryClass={cmp-lg cs.CL}
} | walther1994coupling |
arxiv-668508 | cmp-lg/9412008 | Analysis of Japanese Compound Nouns using Collocational Information | <|reference_start|>Analysis of Japanese Compound Nouns using Collocational Information: Analyzing compound nouns is one of the crucial issues for natural language processing systems, in particular for those systems that aim at a wide coverage of domains. In this paper, we propose a method to analyze structures of Japanese compound nouns by using both word collocation statistics and a thesaurus. An experiment is conducted with 160,000 word collocations to analyze compound nouns with an average length of 4.9 characters. The accuracy of this method is about 80%.<|reference_end|> | arxiv | @article{yosiyuki1994analysis,
title={Analysis of Japanese Compound Nouns using Collocational Information},
author={Kobayasi Yosiyuki and Tokunaga Takenobu and Tanaka Hozumi},
journal={arXiv preprint arXiv:cmp-lg/9412008},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9412008},
primaryClass={cmp-lg cs.CL}
} | yosiyuki1994analysis |
arxiv-668509 | cmp-lg/9501001 | Using default inheritance to describe LTAG | <|reference_start|>Using default inheritance to describe LTAG: We present the results of an investigation into how the set of elementary trees of a Lexicalized Tree Adjoining Grammar can be represented in the lexical knowledge representation language DATR (Evans & Gazdar 1989a,b). The LTAG under consideration is based on the one described in Abeille et al. (1990). Our approach is similar to that of Vijay-Shanker & Schabes (1992) in that we formulate an inheritance hierarchy that efficiently encodes the elementary trees. However, rather than creating a new representation formalism for this task, we employ techniques of established utility in other lexically-oriented frameworks. In particular, we show how DATR's default mechanism can be used to eliminate the need for a non-immediate dominance relation in the descriptions of the surface LTAG entries. This allows us to embed the tree structures in the feature theory in a manner reminiscent of HPSG subcategorisation frames, and hence express lexical rules as relations over feature structures.<|reference_end|> | arxiv | @article{evans1995using,
title={Using default inheritance to describe LTAG},
author={Roger Evans (University of Brighton) and Gerald Gazdar and David
Weir (University of Sussex)},
journal={Proceedings of TAG+3, 1994},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9501001},
primaryClass={cmp-lg cs.CL}
} | evans1995using |
arxiv-668510 | cmp-lg/9501002 | NL Understanding with a Grammar of Constructions | <|reference_start|>NL Understanding with a Grammar of Constructions: We present an approach to natural language understanding based on a computable grammar of constructions. A "construction" consists of a set of features of form and a description of meaning in a context. A grammar is a set of constructions. This kind of grammar is the key element of Mincal, an implemented natural language, speech-enabled interface to an on-line calendar system. The system consists of a NL grammar, a parser, an on-line calendar, a domain knowledge base (about dates, times and meetings), an application knowledge base (about the calendar), a speech recognizer, a speech generator, and the interfaces between those modules. We claim that this architecture should work in general for spoken interfaces in small domains. In this paper we present two novel aspects of the architecture: (a) the use of constructions, integrating descriptions of form, meaning and context into one whole; and (b) the separation of domain knowledge from application knowledge. We describe the data structures for encoding constructions, the structure of the knowledge bases, and the interactions of the key modules of the system.<|reference_end|> | arxiv | @article{zadrozny1995nl,
title={NL Understanding with a Grammar of Constructions},
author={Wlodek Zadrozny and Marcin Szummer and Stanislaw Jarecki and David
E. Johnson and Leora Morgenstern},
journal={arXiv preprint arXiv:cmp-lg/9501002},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9501002},
primaryClass={cmp-lg cs.CL}
} | zadrozny1995nl |
arxiv-668511 | cmp-lg/9501003 | An HPSG Parser Based on Description Logics | <|reference_start|>An HPSG Parser Based on Description Logics: In this paper I present a parser based on Description Logics (DL) for a German HPSG-style fragment. The specified parser relies mainly on the inferential capabilities of the underlying DL system. Given a preferential default extension for DL, disambiguation is achieved by choosing the parse containing a qualitatively minimal number of exceptions.<|reference_end|> | arxiv | @article{quantz1995an,
title={An HPSG Parser Based on Description Logics},
author={J. Joachim Quantz (Technische Universit\"at Berlin)},
journal={arXiv preprint arXiv:cmp-lg/9501003},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9501003},
primaryClass={cmp-lg cs.CL}
} | quantz1995an |
arxiv-668512 | cmp-lg/9501004 | Lexical Knowledge Representation in an Intelligent Dictionary Help System | <|reference_start|>Lexical Knowledge Representation in an Intelligent Dictionary Help System: The frame-based knowledge representation model adopted in IDHS (Intelligent Dictionary Help System) is described in this paper. It is used to represent the lexical knowledge acquired automatically from a conventional dictionary. Moreover, the enrichment processes that have been performed on the Dictionary Knowledge Base and the dynamic exploitation of this knowledge - both based on the exploitation of the properties of lexical semantic relations - are also described.<|reference_end|> | arxiv | @article{agirre1995lexical,
title={Lexical Knowledge Representation in an Intelligent Dictionary Help
System},
author={E. Agirre and X. Arregi and X. Artola and A. Diaz de Ilarraza and
K. Sarasola (Informatika Fakultatea, Basque Country University)},
journal={Proceedings of COLING 94, Vol. 1, 544-550.},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9501004},
primaryClass={cmp-lg cs.CL}
} | agirre1995lexical |
arxiv-668513 | cmp-lg/9501005 | A Tool for Collecting Domain Dependent Sortal Constraints From Corpora | <|reference_start|>A Tool for Collecting Domain Dependent Sortal Constraints From Corpora: In this paper, we describe a tool designed to generate semi-automatically the sortal constraints specific to a domain to be used in a natural language (NL) understanding system. This tool is evaluated using the SRI Gemini NL understanding system in the ATIS domain.<|reference_end|> | arxiv | @article{andry1995a,
title={A Tool for Collecting Domain Dependent Sortal Constraints From Corpora},
author={Francois Andry and Mark Gawron and John Dowding and Robert Moore},
journal={arXiv preprint arXiv:cmp-lg/9501005},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9501005},
primaryClass={cmp-lg cs.CL}
} | andry1995a |
arxiv-668514 | cmp-lg/9502001 | Interlingual Lexical Organisation for Multilingual Lexical Databases in NADIA | <|reference_start|>Interlingual Lexical Organisation for Multilingual Lexical Databases in NADIA: We propose a lexical organisation for multilingual lexical databases (MLDB). This organisation is based on acceptions (word-senses). We detail this lexical organisation and show a mock-up built to experiment with it. We also present our current work in defining and prototyping a specialised system for the management of acception-based MLDB. Keywords: multilingual lexical database, acception, linguistic structure.<|reference_end|> | arxiv | @article{serasset1995interlingual,
title={Interlingual Lexical Organisation for Multilingual Lexical Databases in
NADIA},
author={Gilles Serasset (GETA-IMAG, Universite de Grenoble 1 & CNRS)},
journal={arXiv preprint arXiv:cmp-lg/9502001},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502001},
primaryClass={cmp-lg cs.CL}
} | serasset1995interlingual |
arxiv-668515 | cmp-lg/9502002 | Learning Unification-Based Natural Language Grammars | <|reference_start|>Learning Unification-Based Natural Language Grammars: When parsing unrestricted language, wide-covering grammars often undergenerate. Undergeneration can be tackled either by sentence correction, or by grammar correction. This thesis concentrates upon automatic grammar correction (or machine learning of grammar) as a solution to the problem of undergeneration. Broadly speaking, grammar correction approaches can be classified as being either {\it data-driven}, or {\it model-based}. Data-driven learners use data-intensive methods to acquire grammar. They typically use grammar formalisms unsuited to the needs of practical text processing and cannot guarantee that the resulting grammar is adequate for subsequent semantic interpretation. That is, data-driven learners acquire grammars that generate strings that humans would judge to be grammatically ill-formed (they {\it overgenerate}) and fail to assign linguistically plausible parses. Model-based learners are knowledge-intensive and are reliant for success upon the completeness of a {\it model of grammaticality}. But in practice, the model will be incomplete. Given that in this thesis we deal with undergeneration by learning, we hypothesise that the combined use of data-driven and model-based learning would allow data-driven learning to compensate for model-based learning's incompleteness, whilst model-based learning would compensate for data-driven learning's unsoundness. We describe a system that we have used to test the hypothesis empirically. The system combines data-driven and model-based learning to acquire unification-based grammars that are more suitable for practical text parsing. Using the Spoken English Corpus as data, and by quantitatively measuring undergeneration, overgeneration and parse plausibility, we show that this hypothesis is correct.<|reference_end|> | arxiv | @article{osborne1995learning,
title={Learning Unification-Based Natural Language Grammars},
author={Miles Osborne (Dept. of Computer Science, University of York, York,
England)},
journal={arXiv preprint arXiv:cmp-lg/9502002},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502002},
primaryClass={cmp-lg cs.CL}
} | osborne1995learning |
arxiv-668516 | cmp-lg/9502003 | ProFIT: Prolog with Features, Inheritance and Templates | <|reference_start|>ProFIT: Prolog with Features, Inheritance and Templates: ProFIT is an extension of Standard Prolog with Features, Inheritance and Templates. ProFIT allows the programmer or grammar developer to declare an inheritance hierarchy, features and templates. Sorted feature terms can be used in ProFIT programs together with Prolog terms to provide a clearer description language for linguistic structures. ProFIT compiles all sorted feature terms into a Prolog term representation, so that the built-in Prolog term unification can be used for the unification of sorted feature structures, and no special unification algorithm is needed. ProFIT programs are compiled into Prolog programs, so that no meta-interpreter is needed for their execution. ProFIT thus provides a direct step from grammars developed with sorted feature terms to Prolog programs usable for practical NLP systems.<|reference_end|> | arxiv | @article{erbach1995profit:,
title={ProFIT: Prolog with Features, Inheritance and Templates},
author={Gregor Erbach (Universitaet des Saarlandes, Computational
Linguistics Dept.)},
journal={arXiv preprint arXiv:cmp-lg/9502003},
year={1995},
number={CLAUS Report 42},
archivePrefix={arXiv},
eprint={cmp-lg/9502003},
primaryClass={cmp-lg cs.CL}
} | erbach1995profit: |
arxiv-668517 | cmp-lg/9502004 | Bottom-Up Earley Deduction | <|reference_start|>Bottom-Up Earley Deduction: We propose a bottom-up variant of Earley deduction. Bottom-up deduction is preferable to top-down deduction because it allows incremental processing (even for head-driven grammars), it is data-driven, no subsumption check is needed, and preference values attached to lexical items can be used to guide best-first search. We discuss the scanning step for bottom-up Earley deduction and indexing schemes that help avoid useless deduction steps.<|reference_end|> | arxiv | @article{erbach1995bottom-up,
title={Bottom-Up Earley Deduction},
author={Gregor Erbach (University of the Saarland, Computational
Linguistics Dept.)},
journal={Proceedings of COLING 94, pages 796-802},
year={1995},
number={CLAUS Report 39},
archivePrefix={arXiv},
eprint={cmp-lg/9502004},
primaryClass={cmp-lg cs.CL}
} | erbach1995bottom-up |
arxiv-668518 | cmp-lg/9502005 | Off-line Optimization for Earley-style HPSG Processing | <|reference_start|>Off-line Optimization for Earley-style HPSG Processing: A novel approach to HPSG based natural language processing is described that uses an off-line compiler to automatically prime a declarative grammar for generation or parsing, and inputs the primed grammar to an advanced Earley-style processor. This way we provide an elegant solution to the problems with empty heads and efficient bidirectional processing which is illustrated for the special case of HPSG generation. Extensive testing with a large HPSG grammar revealed some important constraints on the form of the grammar.<|reference_end|> | arxiv | @article{minnen1995off-line,
title={Off-line Optimization for Earley-style HPSG Processing},
author={Guido Minnen and Dale Gerdemann and Thilo Goetz (SFB340, University
of Tuebingen)},
journal={Proceedings EACL 95},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502005},
primaryClass={cmp-lg cs.CL}
} | minnen1995off-line |
arxiv-668519 | cmp-lg/9502006 | Rapid Development of Morphological Descriptions for Full Language Processing Systems | <|reference_start|>Rapid Development of Morphological Descriptions for Full Language Processing Systems: I describe a compiler and development environment for feature-augmented two-level morphology rules integrated into a full NLP system. The compiler is optimized for a class of languages including many or most European ones, and for rapid development and debugging of descriptions of new languages. The key design decision is to compose morphophonological and morphosyntactic information, but not the lexicon, when compiling the description. This results in typical compilation times of about a minute, and has allowed a reasonably full, feature-based description of French inflectional morphology to be developed in about a month by a linguist new to the system.<|reference_end|> | arxiv | @article{carter1995rapid,
title={Rapid Development of Morphological Descriptions for Full Language
Processing Systems},
author={David Carter (SRI International, Cambridge, UK)},
journal={arXiv preprint arXiv:cmp-lg/9502006},
year={1995},
number={CRC-047; see http://www.cam.sri.com/},
archivePrefix={arXiv},
eprint={cmp-lg/9502006},
primaryClass={cmp-lg cs.CL}
} | carter1995rapid |
arxiv-668520 | cmp-lg/9502007 | Utilization of a Lexicon for Spelling Correction in Modern Greek | <|reference_start|>Utilization of a Lexicon for Spelling Correction in Modern Greek: In this paper we present an interactive spelling correction system for Modern Greek. The entire system is based on a morphological lexicon. Emphasis is given to the development of the lexicon, especially as far as storage economy, speed efficiency and dictionary coverage are concerned. Extensive research was conducted from both the computer engineering and linguistics fields, in order to describe inflectional morphology as economically as possible.<|reference_end|> | arxiv | @article{vagelatos1995utilization,
title={Utilization of a Lexicon for Spelling Correction in Modern Greek},
author={A. Vagelatos and T. Triantopoulou and C. Tsalidis and D.
Christodoulakis (Computer Technology Institute & Computer Eng. Dept.,
University of Patras)},
journal={arXiv preprint arXiv:cmp-lg/9502007},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502007},
primaryClass={cmp-lg cs.CL}
} | vagelatos1995utilization |
arxiv-668521 | cmp-lg/9502008 | A Robust and Efficient Three-Layered Dialogue Component for a Speech-to-Speech Translation System | <|reference_start|>A Robust and Efficient Three-Layered Dialogue Component for a Speech-to-Speech Translation System: We present the dialogue component of the speech-to-speech translation system VERBMOBIL. In contrast to conventional dialogue systems it mediates the dialogue while processing maximally 50% of the dialogue in depth. Special requirements like robustness and efficiency lead to a 3-layered hybrid architecture for the dialogue module, using statistics, an automaton and a planner. A dialogue memory is constructed incrementally.<|reference_end|> | arxiv | @article{alexandersson1995a,
title={A Robust and Efficient Three-Layered Dialogue Component for a
Speech-to-Speech Translation System},
author={Jan Alexandersson, Elisabeth Maier, Norbert Reithinger (DFKI GmbH,
Saarbruecken, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9502008},
year={1995},
number={VERBMOBIL Report No. 50, December 1994},
archivePrefix={arXiv},
eprint={cmp-lg/9502008},
primaryClass={cmp-lg cs.CL}
} | alexandersson1995a |
arxiv-668522 | cmp-lg/9502009 | On Learning More Appropriate Selectional Restrictions | <|reference_start|>On Learning More Appropriate Selectional Restrictions: We present some variations affecting the association measure and thresholding on a technique for learning Selectional Restrictions from on-line corpora. It uses a wide-coverage noun taxonomy and a statistical measure to generalize the appropriate semantic classes. Evaluation measures for the Selectional Restrictions learning task are discussed. Finally, an experimental evaluation of these variations is reported.<|reference_end|> | arxiv | @article{ribas1995on,
title={On Learning More Appropriate Selectional Restrictions},
author={Francesc Ribas (Universitat Politecnica de Catalunya)},
journal={Proceedings EACL-95, Ireland},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502009},
primaryClass={cmp-lg cs.CL}
} | ribas1995on |
arxiv-668523 | cmp-lg/9502010 | NPtool, a detector of English noun phrases | <|reference_start|>NPtool, a detector of English noun phrases: NPtool is a fast and accurate system for extracting noun phrases from English texts for the purposes of e.g. information retrieval, translation unit discovery, and corpus studies. After a general introduction, the system architecture is presented in outline. Then follows an examination of a recently written Constraint Syntax. An evaluation report concludes the paper.<|reference_end|> | arxiv | @article{voutilainen1995nptool,
title={NPtool, a detector of English noun phrases},
author={Atro Voutilainen (Research Unit for Computational Linguistics,
University of Helsinki, Finland)},
journal={arXiv preprint arXiv:cmp-lg/9502010},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502010},
primaryClass={cmp-lg cs.CL}
} | voutilainen1995nptool
arxiv-668524 | cmp-lg/9502011 | Specifying a shallow grammatical representation for parsing purposes | <|reference_start|>Specifying a shallow grammatical representation for parsing purposes: Is it possible to specify a grammatical representation (descriptors and their application guidelines) to such a degree that it can be consistently applied by different grammarians e.g. for producing a benchmark corpus for parser evaluation? Arguments for and against have been given, but very little empirical evidence. In this article we report on a double-blind experiment with a surface-oriented morphosyntactic grammatical representation used in a large-scale English parser. We argue that a consistently applicable representation for morphology and also shallow syntax can be specified. A grammatical representation with a near-100% coverage of running text can be specified with a reasonable effort, especially if the representation is based on structural distinctions (i.e. it is structurally resolvable).<|reference_end|> | arxiv | @article{voutilainen1995specifying,
title={Specifying a shallow grammatical representation for parsing purposes},
author={Atro Voutilainen and Timo Jarvinen (Research Unit for Multilingual
Language Technology, University of Helsinki, Finland)},
journal={arXiv preprint arXiv:cmp-lg/9502011},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502011},
primaryClass={cmp-lg cs.CL}
} | voutilainen1995specifying |
arxiv-668525 | cmp-lg/9502012 | A syntax-based part-of-speech analyser | <|reference_start|>A syntax-based part-of-speech analyser: There are two main methodologies for constructing the knowledge base of a natural language analyser: the linguistic and the data-driven. Recent state-of-the-art part-of-speech taggers are based on the data-driven approach. Because of the known feasibility of the linguistic rule-based approach at related levels of description, the success of the data-driven approach in part-of-speech analysis may appear surprising. In this paper, a case is made for the syntactic nature of part-of-speech tagging. A new tagger of English that uses only linguistic distributional rules is outlined and empirically evaluated. Tested against a benchmark corpus of 38,000 words of previously unseen text, this syntax-based system reaches an accuracy of above 99%. Compared to the 95-97% accuracy of its best competitors, this result suggests the feasibility of the linguistic approach also in part-of-speech analysis.<|reference_end|> | arxiv | @article{voutilainen1995a,
title={A syntax-based part-of-speech analyser},
author={Atro Voutilainen (Research Unit for Multilingual Language Technology,
University of Helsinki, Finland)},
journal={arXiv preprint arXiv:cmp-lg/9502012},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502012},
primaryClass={cmp-lg cs.CL}
} | voutilainen1995a |
arxiv-668526 | cmp-lg/9502013 | Ambiguity resolution in a reductionistic parser | <|reference_start|>Ambiguity resolution in a reductionistic parser: We are concerned with dependency-oriented morphosyntactic parsing of running text. While a parsing grammar should avoid introducing structurally unresolvable distinctions in order to optimise on the accuracy of the parser, it also is beneficial for the grammarian to have as expressive a structural representation available as possible. In a reductionistic parsing system this policy may result in considerable ambiguity in the input; however, even massive ambiguity can be tackled efficiently with an accurate parsing description and effective parsing technology.<|reference_end|> | arxiv | @article{voutilainen1995ambiguity,
title={Ambiguity resolution in a reductionistic parser},
author={Atro Voutilainen and Pasi Tapanainen (Research Unit for Computational
Linguistics, University of Helsinki, Finland)},
journal={arXiv preprint arXiv:cmp-lg/9502013},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502013},
primaryClass={cmp-lg cs.CL}
} | voutilainen1995ambiguity |
arxiv-668527 | cmp-lg/9502014 | Ellipsis and Quantification: a substitutional approach | <|reference_start|>Ellipsis and Quantification: a substitutional approach: The paper describes a substitutional approach to ellipsis resolution giving comparable results to Dalrymple, Shieber and Pereira (1991), but without the need for order-sensitive interleaving of quantifier scoping and ellipsis resolution. It is argued that the order-independence results from viewing semantic interpretation as building a description of a semantic composition, instead of the more common view of interpretation as actually performing the composition.<|reference_end|> | arxiv | @article{crouch1995ellipsis,
title={Ellipsis and Quantification: a substitutional approach},
author={Richard Crouch (SRI International, Cambridge, UK)},
journal={arXiv preprint arXiv:cmp-lg/9502014},
year={1995},
number={CRC-054; see http://www.cam.sri.com/},
archivePrefix={arXiv},
eprint={cmp-lg/9502014},
primaryClass={cmp-lg cs.CL}
} | crouch1995ellipsis |
arxiv-668528 | cmp-lg/9502015 | The Semantics of Resource Sharing in Lexical-Functional Grammar | <|reference_start|>The Semantics of Resource Sharing in Lexical-Functional Grammar: We argue that the resource sharing that is commonly manifest in semantic accounts of coordination is instead appropriately handled in terms of structure-sharing in LFG f-structures. We provide an extension to the previous account of LFG semantics (Dalrymple et al., 1993b) according to which dependencies between f-structures are viewed as resources; as a result a one-to-one correspondence between uses of f-structures and meanings is maintained. The resulting system is sufficiently restricted in cases where other approaches overgenerate; the very property of resource-sensitivity for which resource sharing appears to be problematic actually provides explanatory advantages over systems that more freely replicate resources during derivation.<|reference_end|> | arxiv | @article{kehler1995the,
title={The Semantics of Resource Sharing in Lexical-Functional Grammar},
author={Andrew Kehler (Harvard University) and Mary Dalrymple (Xerox PARC)
and John Lamping (Xerox PARC) and Vijay Saraswat (Xerox PARC)},
journal={Proceedings of EACL-95},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502015},
primaryClass={cmp-lg cs.CL}
} | kehler1995the |
arxiv-668529 | cmp-lg/9502016 | Higher-order Linear Logic Programming of Categorial Deduction | <|reference_start|>Higher-order Linear Logic Programming of Categorial Deduction: We show how categorial deduction can be implemented in higher-order (linear) logic programming, thereby realising parsing as deduction for the associative and non-associative Lambek calculi. This provides a method of solution to the parsing problem of Lambek categorial grammar applicable to a variety of its extensions.<|reference_end|> | arxiv | @article{morrill1995higher-order,
title={Higher-order Linear Logic Programming of Categorial Deduction},
author={Glyn Morrill},
journal={arXiv preprint arXiv:cmp-lg/9502016},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502016},
primaryClass={cmp-lg cs.CL}
} | morrill1995higher-order |
arxiv-668530 | cmp-lg/9502017 | Deterministic Consistency Checking of LP Constraints | <|reference_start|>Deterministic Consistency Checking of LP Constraints: We provide a constraint based computational model of linear precedence as employed in the HPSG grammar formalism. An extended feature logic which adds a wide range of constraints involving precedence is described. A sound, complete and terminating deterministic constraint solving procedure is given. A deterministic computational model is achieved by weakening the logic such that it is sufficient for linguistic applications involving word-order.<|reference_end|> | arxiv | @article{manandhar1995deterministic,
title={Deterministic Consistency Checking of LP Constraints},
author={Suresh Manandhar},
journal={arXiv preprint arXiv:cmp-lg/9502017},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502017},
primaryClass={cmp-lg cs.CL}
} | manandhar1995deterministic |
arxiv-668531 | cmp-lg/9502018 | Algorithms for Analysing the Temporal Structure of Discourse | <|reference_start|>Algorithms for Analysing the Temporal Structure of Discourse: We describe a method for analysing the temporal structure of a discourse which takes into account the effects of tense, aspect, temporal adverbials and rhetorical structure and which minimises unnecessary ambiguity in the temporal structure. It is part of a discourse grammar implemented in Carpenter's ALE formalism. The method for building up the temporal structure of the discourse combines constraints and preferences: we use constraints to reduce the number of possible structures, exploiting the HPSG type hierarchy and unification for this purpose; and we apply preferences to choose between the remaining options using a temporal centering mechanism. We end by recommending that an underspecified representation of the structure using these techniques be used to avoid generating the temporal/rhetorical structure until higher-level information can be used to disambiguate.<|reference_end|> | arxiv | @article{hitzeman1995algorithms,
title={Algorithms for Analysing the Temporal Structure of Discourse},
author={Janet Hitzeman and Marc Moens and Claire Grover},
journal={arXiv preprint arXiv:cmp-lg/9502018},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502018},
primaryClass={cmp-lg cs.CL}
} | hitzeman1995algorithms |
arxiv-668532 | cmp-lg/9502019 | Integrating "Free" Word Order Syntax and Information Structure | <|reference_start|>Integrating "Free" Word Order Syntax and Information Structure: This paper describes a combinatory categorial formalism called Multiset-CCG that can capture the syntax and interpretation of ``free'' word order in languages such as Turkish. The formalism compositionally derives the predicate-argument structure and the information structure (e.g. topic, focus) of a sentence in parallel, and uniformly handles word order variation among the arguments and adjuncts within a clause, as well as in complex clauses and across clause boundaries.<|reference_end|> | arxiv | @article{hoffman1995integrating,
title={Integrating "Free" Word Order Syntax and Information Structure},
author={Beryl Hoffman (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9502019},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502019},
primaryClass={cmp-lg cs.CL}
} | hoffman1995integrating |
arxiv-668533 | cmp-lg/9502020 | Formalization and Parsing of Typed Unification-Based ID/LP Grammars | <|reference_start|>Formalization and Parsing of Typed Unification-Based ID/LP Grammars: This paper defines unification-based ID/LP grammars based on typed feature structures as nonterminals and proposes a variant of Earley's algorithm to decide whether a given input sentence is a member of the language generated by a particular typed unification ID/LP grammar. A solution to the problem of the nonlocal flow of information in unification ID/LP grammars as discussed in Seiffert (1991) is incorporated into the algorithm. At the same time, it tries to connect this technical work with linguistics by presenting an example of the problem resulting from HPSG approaches to linguistics (Hinrichs and Nakazawa 1994, Richter and Sailer 1995) and with computational linguistics by drawing connections from this approach to systems implementing HPSG, especially the TROLL system, Gerdemann et al. (forthcoming).<|reference_end|> | arxiv | @article{morawietz1995formalization,
title={Formalization and Parsing of Typed Unification-Based ID/LP Grammars},
author={Frank Morawietz (Master's Thesis, University of Tuebingen, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9502020},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502020},
primaryClass={cmp-lg cs.CL}
} | morawietz1995formalization |
arxiv-668534 | cmp-lg/9502021 | A Tractable Extension of Linear Indexed Grammars | <|reference_start|>A Tractable Extension of Linear Indexed Grammars: It has been shown that Linear Indexed Grammars can be processed in polynomial time by exploiting constraints which make possible the extensive use of structure-sharing. This paper describes a formalism that is more powerful than Linear Indexed Grammar, but which can also be processed in polynomial time using similar techniques. The formalism, which we refer to as Partially Linear PATR manipulates feature structures rather than stacks.<|reference_end|> | arxiv | @article{keller1995a,
title={A Tractable Extension of Linear Indexed Grammars},
author={Bill Keller and David Weir (School of Cognitive & Computing Sciences,
University of Sussex, UK)},
journal={arXiv preprint arXiv:cmp-lg/9502021},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502021},
primaryClass={cmp-lg cs.CL}
} | keller1995a |
arxiv-668535 | cmp-lg/9502022 | Stochastic HPSG | <|reference_start|>Stochastic HPSG: In this paper we provide a probabilistic interpretation for typed feature structures very similar to those used by Pollard and Sag. We begin with a version of the interpretation which lacks a treatment of re-entrant feature structures, then provide an extended interpretation which allows them. We sketch algorithms allowing the numerical parameters of our probabilistic interpretations of HPSG to be estimated from corpora.<|reference_end|> | arxiv | @article{brew1995stochastic,
title={Stochastic HPSG},
author={Chris Brew (Human Communication Research Centre, Edinburgh University)},
journal={arXiv preprint arXiv:cmp-lg/9502022},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502022},
primaryClass={cmp-lg cs.CL}
} | brew1995stochastic |
arxiv-668536 | cmp-lg/9502023 | Splitting the Reference Time: Temporal Anaphora and Quantification in DRT | <|reference_start|>Splitting the Reference Time: Temporal Anaphora and Quantification in DRT: This paper presents an analysis of temporal anaphora in sentences which contain quantification over events, within the framework of Discourse Representation Theory. The analysis in (Partee 1984) of quantified sentences, introduced by a temporal connective, gives the wrong truth-conditions when the temporal connective in the subordinate clause is "before" or "after". This problem has been previously analyzed in (de Swart 1991) as an instance of the proportion problem, and given a solution from a Generalized Quantifier approach. By using a careful distinction between the different notions of reference time, based on (Kamp and Reyle 1993), we propose a solution to this problem, within the framework of DRT. We show some applications of this solution to additional temporal anaphora phenomena in quantified sentences.<|reference_end|> | arxiv | @article{nelken1995splitting,
title={Splitting the Reference Time: Temporal Anaphora and Quantification in
DRT},
author={Rani Nelken (Tel Aviv University, Israel) and Nissim Francez (The
Technion, Israel)},
journal={arXiv preprint arXiv:cmp-lg/9502023},
year={1995},
number={#LCL 94-10},
archivePrefix={arXiv},
eprint={cmp-lg/9502023},
primaryClass={cmp-lg cs.CL}
} | nelken1995splitting |
arxiv-668537 | cmp-lg/9502024 | A Robust Parser Based on Syntactic Information | <|reference_start|>A Robust Parser Based on Syntactic Information: In this paper, we propose a robust parser which can parse extragrammatical sentences. This parser can recover them using only syntactic information. It can be easily modified and extended because it utilizes only syntactic information.<|reference_end|> | arxiv | @article{lee1995a,
title={A Robust Parser Based on Syntactic Information},
author={Kong Joo Lee and Cheol Jung Kweon and Jungyun Seo and Gil Chang Kim},
journal={arXiv preprint arXiv:cmp-lg/9502024},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502024},
primaryClass={cmp-lg cs.CL}
} | lee1995a |
arxiv-668538 | cmp-lg/9502025 | Principle Based Semantics for HPSG | <|reference_start|>Principle Based Semantics for HPSG: The paper presents a constraint based semantic formalism for HPSG. The syntax-semantics interface directly implements syntactic conditions on quantifier scoping and distributivity. The construction of semantic representations is guided by general principles governing the interaction between syntax and semantics. Each of these principles acts as a constraint to narrow down the set of possible interpretations of a sentence. Meanings of ambiguous sentences are represented by single partial representations (so-called U(nderspecified) D(iscourse) R(epresentation) S(tructure)s) to which further constraints can be added monotonically to gain more information about the content of a sentence. There is no need to build up a large number of alternative representations of the sentence which are then filtered by subsequent discourse and world knowledge. The advantage of UDRSs is not only that they allow for monotonic incremental interpretation but also that they are equipped with truth conditions and a proof theory that allows for inferences to be drawn directly on structures where quantifier scope is not resolved.<|reference_end|> | arxiv | @article{frank1995principle,
title={Principle Based Semantics for HPSG},
author={Anette Frank and Uwe Reyle (Institute for Computational
Linguistics, University of Stuttgart)},
journal={arXiv preprint arXiv:cmp-lg/9502025},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502025},
primaryClass={cmp-lg cs.CL}
} | frank1995principle |
arxiv-668539 | cmp-lg/9502026 | On Reasoning with Ambiguities | <|reference_start|>On Reasoning with Ambiguities: The paper addresses the problem of reasoning with ambiguities. Semantic representations are presented that leave scope relations between quantifiers and/or other operators unspecified. Truth conditions are provided for these representations and different consequence relations are judged on the basis of intuitive correctness. Finally inference patterns are presented that operate directly on these underspecified structures, i.e. do not rely on any translation into the set of their disambiguations.<|reference_end|> | arxiv | @article{reyle1995on,
title={On Reasoning with Ambiguities},
author={Uwe Reyle (Institute for Computational Linguistics, University of
Stuttgart)},
journal={arXiv preprint arXiv:cmp-lg/9502026},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502026},
primaryClass={cmp-lg cs.CL}
} | reyle1995on |
arxiv-668540 | cmp-lg/9502027 | Towards an Account of Extraposition in HPSG | <|reference_start|>Towards an Account of Extraposition in HPSG: This paper investigates the syntax of extraposition in the HPSG framework. We present English and German data (partly taken from corpora), and provide an analysis using lexical rules and a nonlocal dependency. The condition for binding this dependency is formulated relative to the antecedent of the extraposed phrase, which entails that no fixed site for extraposition exists. Our analysis accounts for the interaction of extraposition with fronting and coordination, and predicts constraints on multiple extraposition.<|reference_end|> | arxiv | @article{keller1995towards,
title={Towards an Account of Extraposition in HPSG},
author={Frank Keller (Centre for Cognitive Science, University of Edinburgh)},
journal={Proceedings of the EACL-95, Dublin},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502027},
primaryClass={cmp-lg cs.CL}
} | keller1995towards |
arxiv-668541 | cmp-lg/9502028 | Lexical Acquisition via Constraint Solving | <|reference_start|>Lexical Acquisition via Constraint Solving: This paper describes a method to automatically acquire the syntactic and semantic classifications of unknown words. Our method reduces the search space of the lexical acquisition problem by utilizing both the left and the right context of the unknown word. Link Grammar provides a convenient framework in which to implement our method.<|reference_end|> | arxiv | @article{pedersen1995lexical,
title={Lexical Acquisition via Constraint Solving},
author={Ted Pedersen and Weidong Chen (Southern Methodist University)},
journal={arXiv preprint arXiv:cmp-lg/9502028},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502028},
primaryClass={cmp-lg cs.CL}
} | pedersen1995lexical |
arxiv-668542 | cmp-lg/9502029 | Topic Identification in Discourse | <|reference_start|>Topic Identification in Discourse: This paper proposes a corpus-based language model for topic identification. We analyze the association of noun-noun and noun-verb pairs in LOB Corpus. The word association norms are based on three factors: 1) word importance, 2) pair co-occurrence, and 3) distance. They are trained on the paragraph and sentence levels for noun-noun and noun-verb pairs, respectively. Under the topic coherence postulation, the nouns that have the strongest connectivities with the other nouns and verbs in the discourse form the preferred topic set. The collocational semantics then is used to identify the topics from paragraphs and to discuss the topic shift phenomenon among paragraphs.<|reference_end|> | arxiv | @article{chen1995topic,
title={Topic Identification in Discourse},
author={Kuang-hua Chen (Department of Computer Science and Information
Engineering, National Taiwan University)},
journal={arXiv preprint arXiv:cmp-lg/9502029},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502029},
primaryClass={cmp-lg cs.CL}
} | chen1995topic |
arxiv-668543 | cmp-lg/9502030 | Bi-directional memory-based dialog translation: The KEMDT approach | <|reference_start|>Bi-directional memory-based dialog translation: The KEMDT approach: A bi-directional Korean/English dialog translation system is designed and implemented using the memory-based translation technique. The system KEMDT (Korean/English Memory-based Dialog Translation system) can perform Korean to English, and English to Korean translation using a unified memory network and an extended marker passing algorithm. We resolve the word order variation and frequent word omission problems in Korean by classifying the concept sequence elements into four different types and extending the marker-passing-based translation algorithm. Unlike the previous memory-based translation systems, the KEMDT system develops the bilingual memory network and the unified bi-directional marker passing translation algorithm. For efficient language-specific processing, we separate the morphological processors from the memory-based translator. The KEMDT technology provides a hierarchical memory network and an efficient marker-based control for the recent example-based MT paradigm.<|reference_end|> | arxiv | @article{lee1995bi-directional,
title={Bi-directional memory-based dialog translation: The KEMDT approach},
author={Geunbae Lee and Hanmin Jung and Jong-Hyeok Lee (Dept. of Computer
Science, POSTECH, Korea)},
journal={arXiv preprint arXiv:cmp-lg/9502030},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502030},
primaryClass={cmp-lg cs.CL}
} | lee1995bi-directional |
arxiv-668544 | cmp-lg/9502031 | Cooperative Error Handling and Shallow Processing | <|reference_start|>Cooperative Error Handling and Shallow Processing: This paper is concerned with the detection and correction of sub-sentential English text errors. Previous spelling programs, unless restricted to a very small set of words, have operated as post-processors. And to date, grammar checkers and other programs which deal with ill-formed input usually step directly from spelling considerations to a full-scale parse, assuming a complete sentence. Work described below is aimed at evaluating the effectiveness of shallow (sub-sentential) processing and the feasibility of cooperative error checking, through building and testing appropriately an error-processing system. A system under construction is outlined which incorporates morphological checks (using new two-level error rules) over a directed letter graph, tag positional trigrams and partial parsing. Intended testing is discussed.<|reference_end|> | arxiv | @article{bowden1995cooperative,
title={Cooperative Error Handling and Shallow Processing},
author={Tanya Bowden (Computer Lab, University of Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9502031},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502031},
primaryClass={cmp-lg cs.CL}
} | bowden1995cooperative |
arxiv-668545 | cmp-lg/9502032 | An NLP Approach to a Specific Type of Texts: Car Accident Reports | <|reference_start|>An NLP Approach to a Specific Type of Texts: Car Accident Reports: The work reported here is the result of a study done within a larger project on the ``Semantics of Natural Languages'' viewed from the field of Artificial Intelligence and Computational Linguistics. In this project, we have chosen a corpus of insurance claim reports. These texts deal with a relatively circumscribed domain, that of road traffic, thereby limiting the extra-linguistic knowledge necessary to understand them. Moreover, these texts present a number of very specific characteristics, insofar as they are written in a quasi-institutional setting which imposes many constraints on their production. We first determine what these constraints are in order to then show how they provide the writer with the means to create as succint a text as possible, and in a symmetric way, how they provide the reader with the means to interpret the text and to distinguish between its factual and argumentative aspects.<|reference_end|> | arxiv | @article{estival1995an,
title={An NLP Approach to a Specific Type of Texts: Car Accident Reports},
author={Dominique Estival (ISSCO, Universite de Geneve) and Francoise Gayral
(LIPN, Universite Paris-Nord)},
journal={arXiv preprint arXiv:cmp-lg/9502032},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502032},
primaryClass={cmp-lg cs.CL}
} | estival1995an |
arxiv-668546 | cmp-lg/9502033 | An Algorithm to Co-Ordinate Anaphora Resolution and PPS Disambiguation Process | <|reference_start|>An Algorithm to Co-Ordinate Anaphora Resolution and PPS Disambiguation Process: This paper concerns both anaphora resolution and prepositional phrase (PP) attachment, which are the most frequent ambiguities in natural language processing. Several methods have been proposed to deal with each phenomenon separately; however, none of the proposed systems has considered how to deal with both phenomena together. We tackle this issue, proposing an algorithm to co-ordinate the treatment of these two problems efficiently, i.e., the aim is also to exploit at each step all the results that each component can provide.<|reference_end|> | arxiv | @article{azzam1995an,
title={An Algorithm to Co-Ordinate Anaphora Resolution and PPS Disambiguation
Process},
author={Saliha Azzam (University of La Sorbonne)},
journal={arXiv preprint arXiv:cmp-lg/9502033},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502033},
primaryClass={cmp-lg cs.CL}
} | azzam1995an |
arxiv-668547 | cmp-lg/9502034 | Grouping Words Using Statistical Context | <|reference_start|>Grouping Words Using Statistical Context: This paper (cmp-lg/yymmnnn) has been accepted for publication in the student session of EACL-95. It outlines ongoing work using statistical and unsupervised neural network methods for clustering words in untagged corpora. Such approaches are of interest when attempting to understand the development of human intuitive categorization of language as well as for trying to improve computational methods in natural language understanding. Some preliminary results using a simple statistical approach are described, along with work using an unsupervised neural network to distinguish between the sense classes into which words fall.<|reference_end|> | arxiv | @article{huckle1995grouping,
title={Grouping Words Using Statistical Context},
author={Christopher C. Huckle (University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9502034},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502034},
primaryClass={cmp-lg cs.CL}
} | huckle1995grouping |
arxiv-668548 | cmp-lg/9502035 | Incorporating "Unconscious Reanalysis" into an Incremental, Monotonic Parser | <|reference_start|>Incorporating "Unconscious Reanalysis" into an Incremental, Monotonic Parser: This paper describes an implementation based on a recent model in the psycholinguistic literature. We define a parsing operation which allows the reanalysis of dependencies within an incremental and monotonic processing architecture, and discuss search strategies for its application in a head-initial language (English) and a head-final language (Japanese).<|reference_end|> | arxiv | @article{sturt1995incorporating,
title={Incorporating "Unconscious Reanalysis" into an Incremental, Monotonic
Parser},
author={Patrick Sturt (Centre for Cognitive Science, University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9502035},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502035},
primaryClass={cmp-lg cs.CL}
} | sturt1995incorporating |
arxiv-668549 | cmp-lg/9502036 | Literal Movement Grammars | <|reference_start|>Literal Movement Grammars: Literal movement grammars (LMGs) provide a general account of extraposition phenomena through an attribute mechanism allowing top-down displacement of syntactic information. LMGs provide a simple and efficient treatment of complex linguistic phenomena such as cross-serial dependencies in German and Dutch---separating the treatment of natural language into a parsing phase closely resembling traditional context-free treatment, and a disambiguation phase which can be carried out using matching, as opposed to the full unification employed in most current grammar formalisms of linguistic relevance.<|reference_end|> | arxiv | @article{groenink1995literal,
title={Literal Movement Grammars},
author={Annius V. Groenink (CWI, Amsterdam)},
journal={arXiv preprint arXiv:cmp-lg/9502036},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502036},
primaryClass={cmp-lg cs.CL}
} | groenink1995literal |
arxiv-668550 | cmp-lg/9502037 | A State-Transition Grammar for Data-Oriented Parsing | <|reference_start|>A State-Transition Grammar for Data-Oriented Parsing: This paper presents a grammar formalism designed for use in data-oriented approaches to language processing. The formalism is best described as a right-linear indexed grammar extended in linguistically interesting ways. The paper goes on to investigate how a corpus pre-parsed with this formalism may be processed to provide a probabilistic language model for use in the parsing of fresh texts.<|reference_end|> | arxiv | @article{tugwell1995a,
title={A State-Transition Grammar for Data-Oriented Parsing},
author={David Tugwell (University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9502037},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502037},
primaryClass={cmp-lg cs.CL}
} | tugwell1995a |
arxiv-668551 | cmp-lg/9502038 | Implementation and evaluation of a German HMM for POS disambiguation | <|reference_start|>Implementation and evaluation of a German HMM for POS disambiguation: A German language model for the Xerox HMM tagger is presented. This model's performance is compared with that of two other German taggers, with partial parameter re-estimation and full adaptation of parameters from pre-tagged corpora. The ambiguity types resolved by this model are analysed and compared to the ambiguity types of English and French. Finally, the model's error types are described. I argue that although the overall performance of these models for German is comparable to results for English and French, a more exact analysis demonstrates important differences in the types of disambiguation involved for German.<|reference_end|> | arxiv | @article{feldweg1995implementation,
title={Implementation and evaluation of a German HMM for POS disambiguation},
author={Helmut Feldweg (University of Tuebingen)},
journal={Proceedings of the ACL SIGDAT Workshop, Dublin 1995},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502038},
primaryClass={cmp-lg cs.CL}
} | feldweg1995implementation |
arxiv-668552 | cmp-lg/9502039 | Multilingual Sentence Categorization according to Language | <|reference_start|>Multilingual Sentence Categorization according to Language: In this paper, we describe an approach to sentence categorization whose originality is that it is based on natural properties of languages, with no dependency on a training set. The implementation is fast, small, robust and tolerant of textual errors. Tested for French, English, Spanish and German discrimination, the system gives very interesting results, achieving in one test 99.4% correct assignments on real sentences. The resolution power is based on grammatical words (not the most common words) and the alphabet. Having the grammatical words and the alphabet of each language at its disposal, the system computes for each language its likelihood of being selected. The name of the language with the optimum likelihood tags the sentence --- but unresolved ambiguities are maintained. We discuss the reasons which led us to use these linguistic facts and present several directions for improving the system's classification performance. Categorizing sentences by their linguistic properties shows that difficult problems sometimes have simple solutions.<|reference_end|> | arxiv | @article{giguet1995multilingual,
title={Multilingual Sentence Categorization according to Language},
author={Emmanuel Giguet},
journal={Eacl 95 SIGDAT Workshop : From text to tags},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9502039},
primaryClass={cmp-lg cs.CL}
} | giguet1995multilingual |
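A toy sketch of the grammatical-word scoring idea described in the record above. The word inventories here are tiny invented samples, not the system's actual resources, and the real system also uses alphabet evidence; ties between languages correspond to the "maintained ambiguities" the abstract mentions.

```python
# Tiny grammatical-word inventories; the real system uses full
# function-word lists plus alphabet evidence for each language.
GRAMMATICAL = {
    "french":  {"le", "la", "les", "et", "de", "un", "une", "est"},
    "english": {"the", "and", "of", "a", "an", "is", "to", "in"},
    "spanish": {"el", "la", "los", "y", "de", "un", "una", "es"},
    "german":  {"der", "die", "das", "und", "ein", "eine", "ist"},
}

def classify(sentence):
    """Rank languages by the share of their grammatical words found."""
    tokens = sentence.lower().split()
    scores = {lang: sum(t in words for t in tokens) / len(tokens)
              for lang, words in GRAMMATICAL.items()}
    best = max(scores, key=scores.get)
    return best, scores

print(classify("la maison est une des plus belles du village"))
```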
arxiv-668553 | cmp-lg/9503001 | Using a Corpus for Teaching Turkish Morphology | <|reference_start|>Using a Corpus for Teaching Turkish Morphology: This paper reports on the preliminary phase of our ongoing research towards developing an intelligent tutoring environment for Turkish grammar. One of the components of this environment is a corpus search tool which, among other aspects of the language, will be used to present the learner with sample sentences along with their morphological analyses. Following a brief introduction to the Turkish language and its morphology, the paper describes the morphological analysis and ambiguity resolution used to construct the corpus employed in the search tool. Finally, implementation issues and details involving the user interface of the tool are discussed.<|reference_end|> | arxiv | @article{guvenir1995using,
title={Using a Corpus for Teaching Turkish Morphology},
author={H. Altay Guvenir, Kemal Oflazer (Bilkent University, Ankara, Turkey)},
journal={arXiv preprint arXiv:cmp-lg/9503001},
year={1995},
number={Bilkent University CS Dept Tech Report BU-CEIS-9423},
archivePrefix={arXiv},
eprint={cmp-lg/9503001},
primaryClass={cmp-lg cs.CL}
} | guvenir1995using |
arxiv-668554 | cmp-lg/9503002 | Computational dialectology in Irish Gaelic | <|reference_start|>Computational dialectology in Irish Gaelic: Dialect groupings can be discovered objectively and automatically by cluster analysis of phonetic transcriptions such as those found in a linguistic atlas. The first step in the analysis, the computation of linguistic distance between each pair of sites, can be computed as Levenshtein distance between phonetic strings. This correlates closely with the much more laborious technique of determining and counting isoglosses, and is more accurate than the more familiar metric of computing Hamming distance based on whether vocabulary entries match. In the actual clustering step, traditional agglomerative clustering works better than the top-down technique of partitioning around medoids. When agglomerative clustering of phonetic string comparison distances is applied to Gaelic, reasonable dialect boundaries are obtained, corresponding to national and (within Ireland) provincial boundaries.<|reference_end|> | arxiv | @article{kessler1995computational,
title={Computational dialectology in Irish Gaelic},
author={Brett Kessler (Stanford University)},
journal={arXiv preprint arXiv:cmp-lg/9503002},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503002},
primaryClass={cmp-lg cs.CL}
} | kessler1995computational |
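The core computation in the record above — Levenshtein distance between phonetic transcriptions — is compact enough to sketch. This is a minimal illustration, not the paper's code; the site transcriptions below are invented for the example.

```python
def levenshtein(a, b):
    """Edit distance between two phonetic strings (unit costs)."""
    m, n = len(a), len(b)
    # dist[i][j] = distance between a[:i] and b[:j]
    dist = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dist[i][0] = i
    for j in range(n + 1):
        dist[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return dist[m][n]

# Linguistic distance between two sites: mean edit distance over the
# words of the atlas questionnaire (transcriptions are hypothetical).
site_a = ["kat", "bo:", "min"]
site_b = ["kjat", "bo", "men"]
print(sum(levenshtein(x, y) for x, y in zip(site_a, site_b)) / len(site_a))
```

The resulting site-by-site distance matrix is what the agglomerative clustering step would consume.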
arxiv-668555 | cmp-lg/9503003 | Tagging French -- comparing a statistical and a constraint-based method | <|reference_start|>Tagging French -- comparing a statistical and a constraint-based method: In this paper we compare two competing approaches to part-of-speech tagging, statistical and constraint-based disambiguation, using French as our test language. We imposed a time limit on our experiment: the amount of time spent on the design of our constraint system was about the same as the time we used to train and test the easy-to-implement statistical model. We describe the two systems and compare the results. The accuracy of the statistical method is reasonably good, comparable to taggers for English. But the constraint-based tagger seems to be superior even with the limited time we allowed ourselves for rule development.<|reference_end|> | arxiv | @article{chanod1995tagging,
title={Tagging French -- comparing a statistical and a constraint-based method},
author={Jean-Pierre Chanod and Pasi Tapanainen (Rank Xerox Research Centre,
Grenoble Laboratory)},
journal={Seventh Conference of the European Chapter of the ACL (EACL95).
149-156. ACL, Dublin, 1995.},
year={1995},
number={see also technical report MLTT-016 at
http://www.xerox.fr/grenoble/mltt/reports/mltt-016.ps},
archivePrefix={arXiv},
eprint={cmp-lg/9503003},
primaryClass={cmp-lg cs.CL}
} | chanod1995tagging |
arxiv-668556 | cmp-lg/9503004 | Creating a tagset, lexicon and guesser for a French tagger | <|reference_start|>Creating a tagset, lexicon and guesser for a French tagger: We earlier described two taggers for French, a statistical one and a constraint-based one. The two taggers have the same tokeniser and morphological analyser. In this paper, we describe aspects of this work concerned with the definition of the tagset, the building of the lexicon, derived from an existing two-level morphological analyser, and the definition of a lexical transducer for guessing unknown words.<|reference_end|> | arxiv | @article{chanod1995creating,
title={Creating a tagset, lexicon and guesser for a French tagger},
author={Jean-Pierre Chanod and Pasi Tapanainen (Rank Xerox Research Centre,
Grenoble Laboratory)},
journal={ACL SIGDAT workshop: From Texts To Tags: Issues In Multilingual
Language Analysis. 58-64. University College Dublin, Ireland, 1995.},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503004},
primaryClass={cmp-lg cs.CL}
} | chanod1995creating |
arxiv-668557 | cmp-lg/9503005 | A specification language for Lexical Functional Grammars | <|reference_start|>A specification language for Lexical Functional Grammars: This paper defines a language L for specifying LFG grammars. This enables constraints on LFG's composite ontology (c-structures synchronised with f-structures) to be stated directly; no appeal to the LFG construction algorithm is needed. We use L to specify schemata annotated rules and the LFG uniqueness, completeness and coherence principles. Broader issues raised by this work are noted and discussed.<|reference_end|> | arxiv | @article{blackburn1995a,
title={A specification language for Lexical Functional Grammars},
author={Patrick Blackburn and Claire Gardent (University of Saarbruecken)},
journal={arXiv preprint arXiv:cmp-lg/9503005},
year={1995},
number={CLAUS Report Nr. 51},
archivePrefix={arXiv},
eprint={cmp-lg/9503005},
primaryClass={cmp-lg cs.CL}
} | blackburn1995a |
arxiv-668558 | cmp-lg/9503006 | ParseTalk about Sentence- and Text-Level Anaphora | <|reference_start|>ParseTalk about Sentence- and Text-Level Anaphora: We provide a unified account of sentence-level and text-level anaphora within the framework of a dependency-based grammar model. Criteria for anaphora resolution within sentence boundaries rephrase major concepts from GB's binding theory, while those for text-level anaphora incorporate an adapted version of a Grosz-Sidner-style focus model.<|reference_end|> | arxiv | @article{strube1995parsetalk,
title={ParseTalk about Sentence- and Text-Level Anaphora},
author={Michael Strube and Udo Hahn (Computational Linguistics Research Group,
Freiburg University, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9503006},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503006},
primaryClass={cmp-lg cs.CL}
} | strube1995parsetalk |
arxiv-668559 | cmp-lg/9503007 | The Semantics of Motion | <|reference_start|>The Semantics of Motion: In this paper we present a semantic study of motion complexes (i.e., of a motion verb followed by a spatial preposition). We focus on the intrinsic spatial and temporal semantic properties of motion verbs, on the one hand, and of spatial prepositions, on the other hand. Then, we address the problem of combining these basic semantics in order to formally and automatically derive the spatiotemporal semantics of a motion complex from the spatiotemporal properties of its components.<|reference_end|> | arxiv | @article{sablayrolles1995the,
title={The Semantics of Motion},
author={Pierre Sablayrolles (IRIT, University Paul Sabatier, Toulouse, FRANCE)},
journal={Proceedings of the EACL95 Dublin},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503007},
primaryClass={cmp-lg cs.CL}
} | sablayrolles1995the |
arxiv-668560 | cmp-lg/9503008 | Ellipsis and Higher-Order Unification | <|reference_start|>Ellipsis and Higher-Order Unification: We present a new method for characterizing the interpretive possibilities generated by elliptical constructions in natural language. Unlike previous analyses, which postulate ambiguity of interpretation or derivation in the full clause source of the ellipsis, our analysis requires no such hidden ambiguity. Further, the analysis follows relatively directly from an abstract statement of the ellipsis interpretation problem. It predicts correctly a wide range of interactions between ellipsis and other semantic phenomena such as quantifier scope and bound anaphora. Finally, although the analysis itself is stated nonprocedurally, it admits of a direct computational method for generating interpretations.<|reference_end|> | arxiv | @article{dalrymple1995ellipsis,
title={Ellipsis and Higher-Order Unification},
author={Mary Dalrymple (Xerox PARC), Stuart M. Shieber (Harvard University),
and Fernando C. N. Pereira (AT&T Bell Labs)},
journal={Linguistics and Philosophy 14(4):399-452},
year={1995},
number={CSLI-19-91 and Xerox SSL-91-105},
archivePrefix={arXiv},
eprint={cmp-lg/9503008},
primaryClass={cmp-lg cs.CL}
} | dalrymple1995ellipsis |
arxiv-668561 | cmp-lg/9503009 | Distributional Part-of-Speech Tagging | <|reference_start|>Distributional Part-of-Speech Tagging: This paper presents an algorithm for tagging words whose part-of-speech properties are unknown. Unlike previous work, the algorithm categorizes word tokens in context instead of word types. The algorithm is evaluated on the Brown Corpus.<|reference_end|> | arxiv | @article{schuetze1995distributional,
title={Distributional Part-of-Speech Tagging},
author={Hinrich Schuetze (CSLI, Stanford University)},
journal={EACL 95},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503009},
primaryClass={cmp-lg cs.CL}
} | schuetze1995distributional |
arxiv-668562 | cmp-lg/9503010 | Corpus-based Method for Automatic Identification of Support Verbs for Nominalizations | <|reference_start|>Corpus-based Method for Automatic Identification of Support Verbs for Nominalizations: Nominalization is a highly productive phenomenon in most languages. The process of nominalization ejects a verb from its syntactic role into a nominal position. The original verb is often replaced by a semantically emptied support verb (e.g., "make a proposal"). The choice of a support verb for a given nominalization is unpredictable, causing a problem for language learners as well as for natural language processing systems. We present here a method of discovering support verbs from an untagged corpus via low-level syntactic processing and comparison of arguments attached to verbal forms and potential nominalized forms. The result of the process is a list of potential support verbs for the nominalized form of a given predicate.<|reference_end|> | arxiv | @article{grefenstette1995corpus-based,
title={Corpus-based Method for Automatic Identification of Support Verbs for
Nominalizations},
author={Gregory Grefenstette (Rank Xerox Research Centre), Simone Teufel
(Universitat Stuttgart, Institut fur maschinelle Sprachverarbeitung)},
journal={arXiv preprint arXiv:cmp-lg/9503010},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503010},
primaryClass={cmp-lg cs.CL}
} | grefenstette1995corpus-based |
arxiv-668563 | cmp-lg/9503011 | Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies | <|reference_start|>Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies: An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have shown that the new models improve model performance.<|reference_end|> | arxiv | @article{mcmahon1995improving,
title={Improving Statistical Language Model Performance with Automatically
Generated Word Hierarchies},
author={John McMahon and F.J.Smith (Queen's University, Belfast)},
journal={arXiv preprint arXiv:cmp-lg/9503011},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503011},
primaryClass={cmp-lg cs.CL}
} | mcmahon1995improving |
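The average class mutual information metric driving the clustering above has a direct definition over class bigram counts. A minimal sketch follows; the toy corpus and the candidate vocabulary split are invented for illustration, and the actual algorithm searches over splits to maximize this quantity.

```python
from collections import Counter
from math import log2

tokens = "the cat sat on the mat and the dog sat on the rug".split()

def avg_class_mi(partition):
    """Average mutual information between classes of adjacent words."""
    cls = {w: i for i, group in enumerate(partition) for w in group}
    pairs = Counter((cls[a], cls[b]) for a, b in zip(tokens, tokens[1:]))
    n = sum(pairs.values())
    left, right = Counter(), Counter()           # marginal class counts
    for (a, b), f in pairs.items():
        left[a] += f
        right[b] += f
    return sum(f / n * log2((f / n) / ((left[a] / n) * (right[b] / n)))
               for (a, b), f in pairs.items())

# One candidate binary split of the vocabulary (function vs. content
# words); a top-down clusterer would compare many such splits.
vocab = set(tokens)
closed = {"the", "and", "on"}
print(avg_class_mi([closed, vocab - closed]))
```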
arxiv-668564 | cmp-lg/9503012 | A Note on Zipf's Law, Natural Languages, and Noncoding DNA regions | <|reference_start|>A Note on Zipf's Law, Natural Languages, and Noncoding DNA regions: In Phys. Rev. Letters (73:2, 5 Dec. 94), Mantegna et al. conclude on the basis of Zipf rank frequency data that noncoding DNA sequence regions are more like natural languages than coding regions. We argue on the contrary that an empirical fit to Zipf's ``law'' cannot be used as a criterion for similarity to natural languages. Although DNA is presumably an ``organized system of signs'' in Mandelbrot's (1961) sense, an observation of statistical features of the sort presented in the Mantegna et al. paper does not shed light on the similarity between DNA's ``grammar'' and natural language grammars, just as the observation of exact Zipf-like behavior cannot distinguish between the underlying processes of tossing an $M$-sided die or a finite-state branching process.<|reference_end|> | arxiv | @article{niyogi1995a,
title={A Note on Zipf's Law, Natural Languages, and Noncoding DNA regions},
author={Partha Niyogi and Robert C. Berwick (MIT)},
journal={arXiv preprint arXiv:cmp-lg/9503012},
year={1995},
number={MIT CBCL Memo No. 118},
archivePrefix={arXiv},
eprint={cmp-lg/9503012},
primaryClass={cmp-lg cs.CL q-bio}
} | niyogi1995a |
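The die-tossing point in the abstract above is easy to verify empirically: drawing uniformly from a few letters plus a space already produces Zipf-like rank-frequency behavior. A small simulation sketch (not the authors' code):

```python
import random
from collections import Counter

random.seed(0)
# "Intermittent silence": draw uniformly from M letters plus a space;
# the resulting "words" already show a Zipf-like rank-frequency curve.
alphabet = "abcd "          # M = 4 letters plus the word separator
text = "".join(random.choice(alphabet) for _ in range(200_000))
freqs = Counter(text.split())

for rank, (word, f) in enumerate(freqs.most_common(8), start=1):
    print(rank, word, f)
# Plotting log(f) against log(rank) gives the roughly linear relation
# that the paper argues cannot, by itself, diagnose language-likeness.
```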
arxiv-668565 | cmp-lg/9503013 | Incremental Interpretation: Applications, Theory, and Relationship to Dynamic Semantics | <|reference_start|>Incremental Interpretation: Applications, Theory, and Relationship to Dynamic Semantics: Why should computers interpret language incrementally? In recent years psycholinguistic evidence for incremental interpretation has become more and more compelling, suggesting that humans perform semantic interpretation before constituent boundaries, possibly word by word. However, possible computational applications have received less attention. In this paper we consider various potential applications, in particular graphical interaction and dialogue. We then review the theoretical and computational tools available for mapping from fragments of sentences to fully scoped semantic representations. Finally, we tease apart the relationship between dynamic semantics and incremental interpretation.<|reference_end|> | arxiv | @article{milward1995incremental,
title={Incremental Interpretation: Applications, Theory, and Relationship to
Dynamic Semantics},
author={David Milward and Robin Cooper (Centre for Cognitive Science,
University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9503013},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503013},
primaryClass={cmp-lg cs.CL}
} | milward1995incremental |
arxiv-668566 | cmp-lg/9503014 | Non-Constituent Coordination: Theory and Practice | <|reference_start|>Non-Constituent Coordination: Theory and Practice: Despite the large amount of theoretical work done on non-constituent coordination during the last two decades, many computational systems still treat coordination using adapted parsing strategies, in a similar fashion to the SYSCONJ system developed for ATNs. This paper reviews the theoretical literature, and shows why many of the theoretical accounts actually have worse coverage than accounts based on processing. Finally, it shows how processing accounts can be described formally and declaratively in terms of Dynamic Grammars.<|reference_end|> | arxiv | @article{milward1995non-constituent,
title={Non-Constituent Coordination: Theory and Practice},
author={David Milward (Centre for Cognitive Science, University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9503014},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503014},
primaryClass={cmp-lg cs.CL}
} | milward1995non-constituent |
arxiv-668567 | cmp-lg/9503015 | Incremental Interpretation of Categorial Grammar | <|reference_start|>Incremental Interpretation of Categorial Grammar: The paper describes a parser for Categorial Grammar which provides fully word by word incremental interpretation. The parser does not require fragments of sentences to form constituents, and thereby avoids problems of spurious ambiguity. The paper includes a brief discussion of the relationship between basic Categorial Grammar and other formalisms such as HPSG, Dependency Grammar and the Lambek Calculus. It also includes a discussion of some of the issues which arise when parsing lexicalised grammars, and the possibilities for using statistical techniques for tuning to particular languages.<|reference_end|> | arxiv | @article{milward1995incremental,
title={Incremental Interpretation of Categorial Grammar},
author={David Milward (Centre for Cognitive Science, University of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9503015},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503015},
primaryClass={cmp-lg cs.CL}
} | milward1995incremental |
arxiv-668568 | cmp-lg/9503016 | Natural Language Interfaces to Databases - An Introduction | <|reference_start|>Natural Language Interfaces to Databases - An Introduction: This paper is an introduction to natural language interfaces to databases (NLIDBs). A brief overview of the history of NLIDBs is first given. Some advantages and disadvantages of NLIDBs are then discussed, comparing NLIDBs to formal query languages, form-based interfaces, and graphical interfaces. An introduction to some of the linguistic problems NLIDBs have to confront follows, for the benefit of readers less familiar with computational linguistics. The discussion then moves on to NLIDB architectures, portability issues, restricted natural language input systems (including menu-based NLIDBs), and NLIDBs with reasoning capabilities. Some less explored areas of NLIDB research are then presented, namely database updates, meta-knowledge questions, temporal questions, and multi-modal NLIDBs. The paper ends with reflections on the current state of the art.<|reference_end|> | arxiv | @article{androutsopoulos1995natural,
title={Natural Language Interfaces to Databases - An Introduction},
author={I. Androutsopoulos (Dept. of Artificial Intelligence, Univ. of
Edinburgh), G. D. Ritchie (Dept. of Artificial Intelligence, Univ. of
Edinburgh), P. Thanisch (Dept. of Computer Science, Univ. of Edinburgh)},
journal={Natural Language Engineering 1:1, 29-81},
year={1995},
number={DAI RP-709},
archivePrefix={arXiv},
eprint={cmp-lg/9503016},
primaryClass={cmp-lg cs.CL}
} | androutsopoulos1995natural |
arxiv-668569 | cmp-lg/9503017 | Redundancy in Collaborative Dialogue | <|reference_start|>Redundancy in Collaborative Dialogue: In dialogues in which both agents are autonomous, each agent deliberates whether to accept or reject the contributions of the current speaker. A speaker cannot simply assume that a proposal or an assertion will be accepted. However, an examination of a corpus of naturally-occurring problem-solving dialogues shows that agents often do not explicitly indicate acceptance or rejection. Rather the speaker must infer whether the hearer understands and accepts the current contribution based on indirect evidence provided by the hearer's next dialogue contribution. In this paper, I propose a model of the role of informationally redundant utterances in providing evidence to support inferences about mutual understanding and acceptance. The model (1) requires a theory of mutual belief that supports mutual beliefs of various strengths; (2) explains the function of a class of informationally redundant utterances that cannot be explained by other accounts; and (3) contributes to a theory of dialogue by showing how mutual beliefs can be inferred in the absence of the master-slave assumption.<|reference_end|> | arxiv | @article{walker1995redundancy,
title={Redundancy in Collaborative Dialogue},
author={Marilyn A. Walker (University of Pennsylvania)},
journal={Fourteenth International Conference on Computational Linguistics,
Nantes, 1992},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503017},
primaryClass={cmp-lg cs.CL}
} | walker1995redundancy |
arxiv-668570 | cmp-lg/9503018 | Discourse and Deliberation: Testing a Collaborative Strategy | <|reference_start|>Discourse and Deliberation: Testing a Collaborative Strategy: A discourse strategy is a strategy for communicating with another agent. Designing effective dialogue systems requires designing agents that can choose among discourse strategies. We claim that the design of effective strategies must take cognitive factors into account, propose a new method for testing the hypothesized factors, and present experimental results on an effective strategy for supporting deliberation. The proposed method of computational dialogue simulation provides a new empirical basis for computational linguistics.<|reference_end|> | arxiv | @article{walker1995discourse,
title={Discourse and Deliberation: Testing a Collaborative Strategy},
author={Marilyn A. Walker (Mitsubishi Electric Research Laboratories)},
journal={Fifteenth International Conference on Computational Linguistics,
Kyoto},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503018},
primaryClass={cmp-lg cs.CL}
} | walker1995discourse |
arxiv-668571 | cmp-lg/9503019 | SATZ - An Adaptive Sentence Segmentation System | <|reference_start|>SATZ - An Adaptive Sentence Segmentation System: This paper provides a detailed description of the sentence segmentation system first introduced in cmp-lg/9411022. It provides results of systematic experiments involving sentence boundary determination, including context size, lexicon size, and single-case texts. Also included are the results of successfully adapting the system to German and French. The source code for the system is available as a compressed tar file at ftp://cs-tr.CS.Berkeley.EDU/pub/cstr/satz.tar.Z .<|reference_end|> | arxiv | @article{palmer1995satz,
title={SATZ - An Adaptive Sentence Segmentation System},
author={David D. Palmer (University of California at Berkeley)},
journal={arXiv preprint arXiv:cmp-lg/9503019},
year={1995},
number={UC Berkeley Technical Report UCB/CSD-94-846},
archivePrefix={arXiv},
eprint={cmp-lg/9503019},
primaryClass={cmp-lg cs.CL}
} | palmer1995satz |
arxiv-668572 | cmp-lg/9503020 | Different Issues in the Design of a Lemmatizer/Tagger for Basque | <|reference_start|>Different Issues in the Design of a Lemmatizer/Tagger for Basque: This paper presents relevant issues that have been considered in the design of a general purpose lemmatizer/tagger for Basque (EUSLEM). The lemmatizer/tagger is conceived as a basic tool necessary for other linguistic applications. It uses the lexical data base and the morphological analyzer previously developed and implemented. Due to the characteristics of the language, the tagset proposed here is structured in four levels, so that each level is a refinement of the previous one in the sense that it adds more detailed information. We will focus on the problems found in designing this tagset and on the strategies for morphological disambiguation that will be used.<|reference_end|> | arxiv | @article{aduriz1995different,
title={Different Issues in the Design of a Lemmatizer/Tagger for Basque},
author={I. Aduriz, I. Alegria, J. M. Arriola, X. Artola, Diaz de Illarraza A.,
N. Ezeiza, K. Gojenola, M. Maritxalar},
journal={arXiv preprint arXiv:cmp-lg/9503020},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503020},
primaryClass={cmp-lg cs.CL}
} | aduriz1995different |
arxiv-668573 | cmp-lg/9503021 | A Note on the Complexity of Restricted Attribute-Value Grammars | <|reference_start|>A Note on the Complexity of Restricted Attribute-Value Grammars: The recognition problem for attribute-value grammars (AVGs) was shown to be undecidable by Johnson in 1988. Therefore, the general form of AVGs is of no practical use. In this paper we study a very restricted form of AVG, for which the recognition problem is decidable (though still NP-complete), the R-AVG. We show that the R-AVG formalism captures all of the context free languages and more, and introduce a variation on the so-called `off-line parsability constraint', the `honest parsability constraint', which lets different types of R-AVG coincide precisely with well-known time complexity classes.<|reference_end|> | arxiv | @article{torenvliet1995a,
title={A Note on the Complexity of Restricted Attribute-Value Grammars},
author={Leen Torenvliet and Marten Trautwein (University of Amsterdam)},
journal={arXiv preprint arXiv:cmp-lg/9503021},
year={1995},
number={CT-95-02},
archivePrefix={arXiv},
eprint={cmp-lg/9503021},
primaryClass={cmp-lg cs.CL}
} | torenvliet1995a |
arxiv-668574 | cmp-lg/9503022 | Assessing Complexity Results in Feature Theories | <|reference_start|>Assessing Complexity Results in Feature Theories: In this paper, we assess the complexity results of formalisms that describe the feature theories used in computational linguistics. We show that from these complexity results no immediate conclusions can be drawn about the complexity of the recognition problem of unification grammars using these feature theories. On the one hand, the complexity of feature theories does not provide an upper bound for the complexity of such unification grammars. On the other hand, the complexity of feature theories need not provide a lower bound. Therefore, we argue for formalisms that describe actual unification grammars instead of feature theories. Thus the complexity results of these formalisms judge upon the hardness of unification grammars in computational linguistics.<|reference_end|> | arxiv | @article{trautwein1995assessing,
title={Assessing Complexity Results in Feature Theories},
author={Marten Trautwein (University of Amsterdam)},
journal={arXiv preprint arXiv:cmp-lg/9503022},
year={1995},
number={LP-95-01},
archivePrefix={arXiv},
eprint={cmp-lg/9503022},
primaryClass={cmp-lg cs.CL}
} | trautwein1995assessing |
arxiv-668575 | cmp-lg/9503023 | A fast partial parse of natural language sentences using a connectionist method | <|reference_start|>A fast partial parse of natural language sentences using a connectionist method: The pattern matching capabilities of neural networks can be used to locate syntactic constituents of natural language. This paper describes a fully automated hybrid system, using neural nets operating within a grammatic framework. It addresses the representation of language for connectionist processing, and describes methods of constraining the problem size. The function of the network is briefly explained, and results are given.<|reference_end|> | arxiv | @article{lyon1995a,
title={A fast partial parse of natural language sentences using a connectionist
method},
author={Caroline Lyon and Bob Dickerson (University of Hertfordshire)},
journal={arXiv preprint arXiv:cmp-lg/9503023},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503023},
primaryClass={cmp-lg cs.CL}
} | lyon1995a |
arxiv-668576 | cmp-lg/9503024 | From compositional to systematic semantics | <|reference_start|>From compositional to systematic semantics: We prove a theorem stating that any semantics can be encoded as a compositional semantics, which means that, essentially, the standard definition of compositionality is formally vacuous. We then show that when compositional semantics is required to be "systematic" (that is, the meaning function cannot be arbitrary, but must belong to some class), it is possible to distinguish between compositional and non-compositional semantics. As a result, we believe that the paper clarifies the concept of compositionality and opens a possibility of making systematic formal comparisons of different systems of grammars.<|reference_end|> | arxiv | @article{zadrozny1995from,
title={From compositional to systematic semantics},
author={Wlodek Zadrozny},
journal={Linguistics and Philosophy(17):329-342, 1994},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9503024},
primaryClass={cmp-lg cs.CL}
} | zadrozny1995from |
arxiv-668577 | cmp-lg/9503025 | Co-occurrence Vectors from Corpora vs Distance Vectors from Dictionaries | <|reference_start|>Co-occurrence Vectors from Corpora vs Distance Vectors from Dictionaries: A comparison was made of vectors derived by using ordinary co-occurrence statistics from large text corpora and of vectors derived by measuring the inter-word distances in dictionary definitions. The precision of word sense disambiguation by using co-occurrence vectors from the 1987 Wall Street Journal (20M total words) was higher than that by using distance vectors from the Collins English Dictionary (60K head words + 1.6M definition words). However, other experimental results suggest that distance vectors contain some different semantic information from co-occurrence vectors.<|reference_end|> | arxiv | @article{niwa1995co-occurrence,
title={Co-occurrence Vectors from Corpora vs. Distance Vectors from
Dictionaries},
author={Yoshiki Niwa and Yoshihiko Nitta (ARL, Hitachi, Ltd.)},
journal={COLING94, 304-309.},
year={1995},
number={ARL Research Report No. 94-003},
archivePrefix={arXiv},
eprint={cmp-lg/9503025},
primaryClass={cmp-lg cs.CL}
} | niwa1995co-occurrence |
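A minimal sketch of building the co-occurrence vectors compared in the record above; the window size, toy corpus, and cosine similarity are illustrative choices, not necessarily the paper's exact settings.

```python
from collections import Counter, defaultdict
from math import sqrt

# Toy corpus; the paper used the 1987 Wall Street Journal (20M words).
corpus = ("the bank raised the interest rate while "
          "the river bank flooded the field").split()
WINDOW = 2  # symmetric context window (illustrative choice)

vectors = defaultdict(Counter)
for i, w in enumerate(corpus):
    lo, hi = max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)
    for j in range(lo, hi):
        if j != i:
            vectors[w][corpus[j]] += 1   # count context word

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    norm = (sqrt(sum(x * x for x in u.values()))
            * sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

print(cosine(vectors["bank"], vectors["rate"]))
```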
arxiv-668578 | cmp-lg/9504001 | Automatic processing of proper names in texts | <|reference_start|>Automatic processing of proper names in texts: This paper first shows the problems raised by proper names in natural language processing. Second, it introduces the knowledge representation structure we use, based on conceptual graphs. Then it explains the techniques used to process known and unknown proper names. Finally, it gives the performance of the system and the further work we intend to pursue.<|reference_end|> | arxiv | @article{wolinski1995automatic,
title={Automatic processing of proper names in texts},
author={Francis Wolinski, Frantz Vichot, Bruno Dillet (Informatique CDC)},
journal={arXiv preprint arXiv:cmp-lg/9504001},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504001},
primaryClass={cmp-lg cs.CL}
} | wolinski1995automatic |
arxiv-668579 | cmp-lg/9504002 | Tagset Design and Inflected Languages | <|reference_start|>Tagset Design and Inflected Languages: An experiment designed to explore the relationship between tagging accuracy and the nature of the tagset is described, using corpora in English, French and Swedish. In particular, the question of internal versus external criteria for tagset design is considered, with the general conclusion that external (linguistic) criteria should be followed. Some problems associated with tagging unknown words in inflected languages are briefly considered.<|reference_end|> | arxiv | @article{elworthy1995tagset,
title={Tagset Design and Inflected Languages},
author={David Elworthy},
journal={arXiv preprint arXiv:cmp-lg/9504002},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504002},
primaryClass={cmp-lg cs.CL}
} | elworthy1995tagset |
arxiv-668580 | cmp-lg/9504003 | Collaborating on Referring Expressions | <|reference_start|>Collaborating on Referring Expressions: This paper presents a computational model of how conversational participants collaborate in order to make a referring action successful. The model is based on the view of language as goal-directed behavior. We propose that the content of a referring expression can be accounted for by the planning paradigm. Not only does this approach allow the processes of building referring expressions and identifying their referents to be captured by plan construction and plan inference, it also allows us to account for how participants clarify a referring expression by using meta-actions that reason about and manipulate the plan derivation that corresponds to the referring expression. To account for how clarification goals arise and how inferred clarification plans affect the agent, we propose that the agents are in a certain state of mind, and that this state includes an intention to achieve the goal of referring and a plan that the agents are currently considering. It is this mental state that sanctions the adoption of goals and the acceptance of inferred plans, and so acts as a link between understanding and generation.<|reference_end|> | arxiv | @article{heeman1995collaborating,
title={Collaborating on Referring Expressions},
author={Peter A. Heeman (University of Rochester) and Graeme Hirst (University
of Toronto)},
journal={arXiv preprint arXiv:cmp-lg/9504003},
year={1995},
number={TR 435},
archivePrefix={arXiv},
eprint={cmp-lg/9504003},
primaryClass={cmp-lg cs.CL}
} | heeman1995collaborating |
arxiv-668581 | cmp-lg/9504004 | A Computational Treatment of HPSG Lexical Rules as Covariation in Lexical Entries | <|reference_start|>A Computational Treatment of HPSG Lexical Rules as Covariation in Lexical Entries: We describe a compiler which translates a set of HPSG lexical rules and their interaction into definite relations used to constrain lexical entries. The compiler ensures automatic transfer of properties unchanged by a lexical rule. Thus an operational semantics for the full lexical rule mechanism as used in HPSG linguistics is provided. Program transformation techniques are used to advance the resulting encoding. The final output constitutes a computational counterpart of the linguistic generalizations captured by lexical rules and allows ``on the fly'' application.<|reference_end|> | arxiv | @article{meurers1995a,
title={A Computational Treatment of HPSG Lexical Rules as Covariation in
Lexical Entries},
author={Walt Detmar Meurers and Guido Minnen (SFB 340, Univ. T"ubingen)},
journal={arXiv preprint arXiv:cmp-lg/9504004},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504004},
primaryClass={cmp-lg cs.CL}
} | meurers1995a |
arxiv-668582 | cmp-lg/9504005 | Constraint Logic Programming for Natural Language Processing | <|reference_start|>Constraint Logic Programming for Natural Language Processing: This paper proposes an evaluation of the adequacy of the constraint logic programming paradigm for natural language processing. Theoretical aspects of this question have been discussed in several works. We adopt here a pragmatic point of view and our argumentation relies on concrete solutions. Using actual constraints (in the CLP sense) is neither easy nor direct. However, CLP can improve parsing techniques in several aspects such as concision, control, efficiency or direct representation of linguistic formalism. This discussion is illustrated by several examples and the presentation of an HPSG parser.<|reference_end|> | arxiv | @article{blache1995constraint,
title={Constraint Logic Programming for Natural Language Processing},
author={Philippe Blache (2LC-CNRS) and Nabil Hathout (INaLF-CNRS)},
journal={arXiv preprint arXiv:cmp-lg/9504005},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504005},
primaryClass={cmp-lg cs.CL}
} | blache1995constraint |
arxiv-668583 | cmp-lg/9504006 | Cues and control in Expert-Client Dialogues | <|reference_start|>Cues and control in Expert-Client Dialogues: We conducted an empirical analysis into the relation between control and discourse structure. We applied control criteria to four dialogues and identified three levels of discourse structure. We investigated the mechanism for changing control between these structures and found that utterance type and not cue words predicted shifts of control. Participants used certain types of signals when discourse goals were proceeding successfully but resorted to interruptions when they were not.<|reference_end|> | arxiv | @article{whittaker1995cues,
title={Cues and control in Expert-Client Dialogues},
author={Steve Whittaker (Hewlett Packard Laboratories), Phil Stenton (Hewlett
Packard Laboratories)},
journal={Proceedings of the 26th Annual Meeting of the Association of
Computational Linguistics, 1988},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504006},
primaryClass={cmp-lg cs.CL}
} | whittaker1995cues |
arxiv-668584 | cmp-lg/9504007 | Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation | <|reference_start|>Mixed Initiative in Dialogue: An Investigation into Discourse Segmentation: Conversation between two people is usually of mixed-initiative, with control over the conversation being transferred from one person to another. We apply a set of rules for the transfer of control to 4 sets of dialogues consisting of a total of 1862 turns. The application of the control rules lets us derive domain-independent discourse structures. The derived structures indicate that initiative plays a role in the structuring of discourse. In order to explore the relationship of control and initiative to discourse processes like centering, we analyze the distribution of four different classes of anaphora for two data sets. This distribution indicates that some control segments are hierarchically related to others. The analysis suggests that discourse participants often mutually agree to a change of topic. We also compared initiative in Task Oriented and Advice Giving dialogues and found that both allocation of control and the manner in which control is transferred is radically different for the two dialogue types. These differences can be explained in terms of collaborative planning principles.<|reference_end|> | arxiv | @article{walker1995mixed,
title={Mixed Initiative in Dialogue: An Investigation into Discourse
Segmentation},
author={Marilyn Walker (University of Pennsylvania), Steve Whittaker (Hewlett
Packard Laboratories)},
journal={Proceedings of the 28th Annual Meeting of the Association of
Computational Linguistics, 1990},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504007},
primaryClass={cmp-lg cs.CL}
} | walker1995mixed |
arxiv-668585 | cmp-lg/9504008 | SKOPE: A connectionist/symbolic architecture of spoken Korean processing | <|reference_start|>SKOPE: A connectionist/symbolic architecture of spoken Korean processing: Spoken language processing requires speech and natural language integration. Moreover, spoken Korean calls for unique processing methodology due to its linguistic characteristics. This paper presents SKOPE, a connectionist/symbolic spoken Korean processing engine, which emphasizes that: 1) connectionist and symbolic techniques must be selectively applied according to their relative strength and weakness, and 2) the linguistic characteristics of Korean must be fully considered for phoneme recognition, speech and language integration, and morphological/syntactic processing. The design and implementation of SKOPE demonstrates how connectionist/symbolic hybrid architectures can be constructed for spoken agglutinative language processing. Also SKOPE presents many novel ideas for speech and language processing. The phoneme recognition, morphological analysis, and syntactic analysis experiments show that SKOPE is a viable approach for the spoken Korean processing.<|reference_end|> | arxiv | @article{lee1995skope:,
title={SKOPE: A connectionist/symbolic architecture of spoken Korean processing},
author={Geunbae Lee and Jong-Hyeok Lee (Department of Computer Science &
Engineering and Postech Information Research Laboratory, Pohang University of
Science & Technology, Hoja-Dong, Pohang, Korea)},
journal={arXiv preprint arXiv:cmp-lg/9504008},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504008},
primaryClass={cmp-lg cs.CL}
} | lee1995skope: |
arxiv-668586 | cmp-lg/9504009 | Abstract Machine for Typed Feature Structures | <|reference_start|>Abstract Machine for Typed Feature Structures: This paper describes an abstract machine for linguistic formalisms that are based on typed feature structures, such as HPSG. The core design of the abstract machine is given in detail, including the compilation process from a high-level language to the abstract machine language and the implementation of the abstract instructions. The machine's engine supports the unification of typed, possibly cyclic, feature structures. A separate module deals with control structures and instructions to accommodate parsing for phrase structure grammars. We treat the linguistic formalism as a high-level declarative programming language, applying methods that were proved useful in computer science to the study of natural languages: a grammar specified using the formalism is endowed with an operational semantics.<|reference_end|> | arxiv | @article{wintner1995abstract,
title={Abstract Machine for Typed Feature Structures},
author={Shuly Wintner and Nissim Francez (Computer Science, Technion, Israel
Institute of Technology, Haifa 32000, Israel)},
journal={Proc. 5th Int. Workshop on Natural Language Understanding and
Logic Programming, Lisbon, May 1995},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504009},
primaryClass={cmp-lg cs.CL}
} | wintner1995abstract |
arxiv-668587 | cmp-lg/9504010 | MAXIMUM LIKELIHOOD AND MINIMUM ENTROPY IDENTIFICATION OF GRAMMARS | <|reference_start|>MAXIMUM LIKELIHOOD AND MINIMUM ENTROPY IDENTIFICATION OF GRAMMARS: Using the Thermodynamic Formalism, we introduce a Gibbsian model for the identification of regular grammars based only on positive evidence. This model mimics the natural language acquisition procedure driven by prosody, which is here represented by the thermodynamical potential. The statistical question we face is how to estimate the incidence matrix of a subshift of finite type from a sample produced by a Gibbs state whose potential is known. The model accounts for both the robustness of the language acquisition procedure and language changes. The probabilistic approach we use avoids invoking ad-hoc restrictions such as Berwick's Subset Principle.<|reference_end|> | arxiv | @article{collet1995maximum,
title={MAXIMUM LIKELIHOOD AND MINIMUM ENTROPY IDENTIFICATION OF GRAMMARS},
author={P.Collet (CNRS), A.Galves (USP), A.Lopes (UFRGS)},
journal={arXiv preprint arXiv:cmp-lg/9504010},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504010},
primaryClass={cmp-lg cs.CL}
} | collet1995maximum |
arxiv-668588 | cmp-lg/9504011 | A Processing Model for Free Word Order Languages | <|reference_start|>A Processing Model for Free Word Order Languages: Like many verb-final languages, German displays considerable word-order freedom: there is no syntactic constraint on the ordering of the nominal arguments of a verb, as long as the verb remains in final position. This effect is referred to as ``scrambling'', and is interpreted in transformational frameworks as leftward movement of the arguments. Furthermore, arguments from an embedded clause may move out of their clause; this effect is referred to as ``long-distance scrambling''. While scrambling has recently received considerable attention in the syntactic literature, the status of long-distance scrambling has only rarely been addressed. The reason for this is the problematic status of the data: not only is long-distance scrambling highly dependent on pragmatic context, it also is strongly subject to degradation due to processing constraints. As in the case of center-embedding, it is not immediately clear whether to assume that observed unacceptability of highly complex sentences is due to grammatical restrictions, or whether we should assume that the competence grammar does not place any restrictions on scrambling (and that, therefore, all such sentences are in fact grammatical), and the unacceptability of some (or most) of the grammatically possible word orders is due to processing limitations. In this paper, we will argue for the second view by presenting a processing model for German.<|reference_end|> | arxiv | @article{rambow1995a,
title={A Processing Model for Free Word Order Languages},
author={Owen Rambow and Aravind K. Joshi},
journal={arXiv preprint arXiv:cmp-lg/9504011},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504011},
primaryClass={cmp-lg cs.CL}
} | rambow1995a |
arxiv-668589 | cmp-lg/9504012 | Linear Logic for Meaning Assembly | <|reference_start|>Linear Logic for Meaning Assembly: Semantic theories of natural language associate meanings with utterances by providing meanings for lexical items and rules for determining the meaning of larger units given the meanings of their parts. Meanings are often assumed to combine via function application, which works well when constituent structure trees are used to guide semantic composition. However, we believe that the functional structure of Lexical-Functional Grammar is best used to provide the syntactic information necessary for constraining derivations of meaning in a cross-linguistically uniform format. It has been difficult, however, to reconcile this approach with the combination of meanings by function application. In contrast to compositional approaches, we present a deductive approach to assembling meanings, based on reasoning with constraints, which meshes well with the unordered nature of information in the functional structure. Our use of linear logic as a `glue' for assembling meanings allows for a coherent treatment of the LFG requirements of completeness and coherence as well as of modification and quantification.<|reference_end|> | arxiv | @article{dalrymple1995linear,
title={Linear Logic for Meaning Assembly},
author={Mary Dalrymple, John Lamping, Fernando Pereira, and Vijay Saraswat},
journal={Proceedings of CLNLP, Edinburgh, April 1995},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504012},
primaryClass={cmp-lg cs.CL}
} | dalrymple1995linear |
arxiv-668590 | cmp-lg/9504013 | NLG vs Templates | <|reference_start|>NLG vs Templates: One of the most important questions in applied NLG is what benefits (or `value-added', in business-speak) NLG technology offers over template-based approaches. Despite the importance of this question to the applied NLG community, however, it has not been discussed much in the research NLG community, which I think is a pity. In this paper, I try to summarize the issues involved and recap current thinking on this topic. My goal is not to answer this question (I don't think we know enough to be able to do so), but rather to increase the visibility of this issue in the research community, in the hope of getting some input and ideas on this very important question. I conclude with a list of specific research areas I would like to see more work in, because I think they would increase the `value-added' of NLG over templates.<|reference_end|> | arxiv | @article{reiter1995nlg,
title={NLG vs. Templates},
author={Ehud Reiter (CoGenTex, Ithaca, USA)},
journal={arXiv preprint arXiv:cmp-lg/9504013},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504013},
primaryClass={cmp-lg cs.CL}
} | reiter1995nlg |
arxiv-668591 | cmp-lg/9504014 | LexGram - a practical categorial grammar formalism - | <|reference_start|>LexGram - a practical categorial grammar formalism -: We present the LexGram system, an amalgam of (Lambek) categorial grammar and Head Driven Phrase Structure Grammar (HPSG), and show that the grammar formalism it implements is a well-structured and useful tool for actual grammar development.<|reference_end|> | arxiv | @article{koenig1995lexgram,
title={LexGram - a practical categorial grammar formalism -},
author={Esther Koenig (University of Stuttgart)},
journal={Proc.CLNLP95, Edinburgh},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504014},
primaryClass={cmp-lg cs.CL}
} | koenig1995lexgram |
arxiv-668592 | cmp-lg/9504015 | Estimating Lexical Priors for Low-Frequency Syncretic Forms | <|reference_start|>Estimating Lexical Priors for Low-Frequency Syncretic Forms: Given a previously unseen form that is morphologically n-ways ambiguous, what is the best estimator for the lexical prior probabilities for the various functions of the form? We argue that the best estimator is provided by computing the relative frequencies of the various functions among the hapax legomena --- the forms that occur exactly once in a corpus. This result has important implications for the development of stochastic morphological taggers, especially when some initial hand-tagging of a corpus is required: For predicting lexical priors for very low-frequency morphologically ambiguous types (most of which would not occur in any given corpus) one should concentrate on tagging a good representative sample of the hapax legomena, rather than extensively tagging words of all frequency ranges.<|reference_end|> | arxiv | @article{baayen1995estimating,
title={Estimating Lexical Priors for Low-Frequency Syncretic Forms},
author={Harald Baayen (Max Planck Institute for Psycholinguistics), Richard
Sproat (AT&T Bell Laboratories)},
journal={arXiv preprint arXiv:cmp-lg/9504015},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504015},
primaryClass={cmp-lg cs.CL}
} | baayen1995estimating |
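The estimator proposed above — relative tag frequencies among the hapax legomena — reduces to a few lines given a tagged corpus. A sketch under the assumption of a simple (form, tag) token list; the corpus here is invented:

```python
from collections import Counter

# Hypothetical tagged corpus: (form, tag) token pairs.
corpus = [("walks", "VBZ"), ("walks", "NNS"), ("runs", "VBZ"),
          ("glides", "VBZ"), ("herds", "NNS")]

form_counts = Counter(form for form, _ in corpus)
hapaxes = {form for form, c in form_counts.items() if c == 1}

# Tag distribution over the hapax legomena only.
tag_counts = Counter(tag for form, tag in corpus if form in hapaxes)
total = sum(tag_counts.values())
priors = {tag: c / total for tag, c in tag_counts.items()}
print(priors)  # estimated lexical priors for unseen ambiguous forms
```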
arxiv-668593 | cmp-lg/9504016 | Memoization of Top Down Parsing | <|reference_start|>Memoization of Top Down Parsing: This paper discusses the relationship between memoized top-down recognizers and chart parsers. It presents a version of memoization suitable for continuation-passing style programs. When applied to a simple formalization of a top-down recognizer it yields a terminating parser.<|reference_end|> | arxiv | @article{johnson1995memoization,
title={Memoization of Top Down Parsing},
author={Mark Johnson (Brown University)},
journal={arXiv preprint arXiv:cmp-lg/9504016},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504016},
primaryClass={cmp-lg cs.CL}
} | johnson1995memoization |
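A direct-style sketch of a memoized top-down recognizer in the spirit of the record above. Note that this simple version is not the paper's continuation-passing formulation and, unlike it, would loop on left-recursive rules; the grammar and token sequence are invented:

```python
from functools import lru_cache

# Toy grammar: nonterminal -> alternatives, each a tuple of symbols.
GRAMMAR = {
    "S":  [("NP", "VP")],
    "NP": [("det", "n")],
    "VP": [("v", "NP"), ("v",)],
}
TOKENS = ("det", "n", "v", "det", "n")

@lru_cache(maxsize=None)          # memo table plays the role of a chart
def parse(sym, i):
    """Positions reachable after recognizing sym starting at position i."""
    if sym not in GRAMMAR:        # terminal symbol
        if i < len(TOKENS) and TOKENS[i] == sym:
            return frozenset([i + 1])
        return frozenset()
    ends = set()
    for alt in GRAMMAR[sym]:      # try each alternative
        frontier = {i}
        for s in alt:             # thread positions through the RHS
            frontier = {k for j in frontier for k in parse(s, j)}
        ends |= frontier
    return frozenset(ends)

print(len(TOKENS) in parse("S", 0))   # True: the tokens form an S
```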
arxiv-668594 | cmp-lg/9504017 | A Uniform Treatment of Pragmatic Inferences in Simple and Complex Utterances and Sequences of Utterances | <|reference_start|>A Uniform Treatment of Pragmatic Inferences in Simple and Complex Utterances and Sequences of Utterances: Drawing appropriate defeasible inferences has been proven to be one of the most pervasive puzzles of natural language processing and a recurrent problem in pragmatics. This paper provides a theoretical framework, called ``stratified logic'', that can accommodate defeasible pragmatic inferences. The framework yields an algorithm that computes the conversational, conventional, scalar, clausal, and normal state implicatures; and the presuppositions that are associated with utterances. The algorithm applies equally to simple and complex utterances and sequences of utterances.<|reference_end|> | arxiv | @article{marcu1995a,
title={A Uniform Treatment of Pragmatic Inferences in Simple and Complex
Utterances and Sequences of Utterances},
author={Daniel Marcu and Graeme Hirst (University of Toronto)},
journal={arXiv preprint arXiv:cmp-lg/9504017},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504017},
primaryClass={cmp-lg cs.CL}
} | marcu1995a |
arxiv-668595 | cmp-lg/9504018 | An Implemented Formalism for Computing Linguistic Presuppositions and Existential Commitments | <|reference_start|>An Implemented Formalism for Computing Linguistic Presuppositions and Existential Commitments: We rely on the strength of linguistic and philosophical perspectives in constructing a framework that offers a unified explanation for presuppositions and existential commitment. We use a rich ontology and a set of methodological principles that embed the essence of Meinong's philosophy and Grice's conversational principles into a stratified logic, under an unrestricted interpretation of the quantifiers. The result is a logical formalism that yields a tractable computational method that uniformly calculates all the presuppositions of a given utterance, including the existential ones.<|reference_end|> | arxiv | @article{marcu1995an,
title={An Implemented Formalism for Computing Linguistic Presuppositions and
Existential Commitments},
author={Daniel Marcu and Graeme Hirst (University of Toronto)},
journal={arXiv preprint arXiv:cmp-lg/9504018},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504018},
primaryClass={cmp-lg cs.CL}
} | marcu1995an |
arxiv-668596 | cmp-lg/9504019 | A Formalism and an Algorithm for Computing Pragmatic Inferences and Detecting Infelicities | <|reference_start|>A Formalism and an Algorithm for Computing Pragmatic Inferences and Detecting Infelicities: Since Austin introduced the term ``infelicity'', the linguistic literature has been flooded with its use, but no formal or computational explanation has been given for it. This thesis provides one for those infelicities that occur when a pragmatic inference is cancelled. Our contribution assumes the existence of a finer-grained taxonomy with respect to pragmatic inferences. It is shown that if one wants to account for the expressiveness of natural language, one should distinguish between pragmatic inferences that are felicitous to defeat and pragmatic inferences that are infelicitously defeasible. Thus, it is shown that one should consider at least three types of information: indefeasible, felicitously defeasible, and infelicitously defeasible. The cancellation of the last of these determines the pragmatic infelicities. A new formalism has been devised to accommodate the three levels of information, called ``stratified logic''. Within it, we are able to express formally notions such as ``utterance U presupposes P'' or ``utterance U is infelicitous''. Special attention is paid to the implications that our work has in solving some well-known existential philosophical puzzles. The formalism yields an algorithm, implemented in Common Lisp, for computing interpretations for utterances, for determining their associated presuppositions, and for signalling infelicitous utterances. The algorithm applies equally to simple and complex utterances and sequences of utterances.<|reference_end|> | arxiv | @article{marcu1995a,
title={A Formalism and an Algorithm for Computing Pragmatic Inferences and
Detecting Infelicities},
author={Daniel Marcu (University of Toronto)},
journal={arXiv preprint arXiv:cmp-lg/9504019},
year={1995},
number={Technical Report CSRI-309, Computer Systems Research Institute,
University of Toronto, October 1994},
archivePrefix={arXiv},
eprint={cmp-lg/9504019},
primaryClass={cmp-lg cs.CL}
} | marcu1995a |
arxiv-668597 | cmp-lg/9504020 | Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions | <|reference_start|>Computational Interpretations of the Gricean Maxims in the Generation of Referring Expressions: We examine the problem of generating definite noun phrases that are appropriate referring expressions; i.e., noun phrases that (1) successfully identify the intended referent to the hearer whilst (2) not conveying to her any false conversational implicatures (Grice, 1975). We review several possible computational interpretations of the conversational implicature maxims, with different computational costs, and argue that the simplest may be the best, because it seems to be closest to what human speakers do. We describe our recommended algorithm in detail, along with a specification of the resources a host system must provide in order to make use of the algorithm, and an implementation used in the natural language generation component of the IDAS system. This paper will appear in the April--June 1995 issue of Cognitive Science, and is made available on cmp-lg with the permission of Ablex, the publishers of that journal.<|reference_end|> | arxiv | @article{dale1995computational,
title={Computational Interpretations of the Gricean Maxims in the Generation of
Referring Expressions},
author={Robert Dale (Microsoft, Sydney) and Ehud Reiter (CoGenTeX, Ithaca)},
journal={arXiv preprint arXiv:cmp-lg/9504020},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504020},
primaryClass={cmp-lg cs.CL}
} | dale1995computational |
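The recommended algorithm is incremental: attributes are considered in a fixed preference order, each is kept iff it rules out at least one remaining distractor, and the loop stops as soon as the referent is distinguished, with no backtracking and no search for a minimal description. A sketch of that control structure follows (Python; the interface names are mine, and refinements from the paper, such as always including the head noun's type, are omitted).

```python
def make_referring_expression(referent, distractors, preferred_attrs, value_of):
    """Incrementally build a distinguishing description as a list of
    (attribute, value) pairs, or return None if the preferred attributes
    cannot single out the referent."""
    description, remaining = [], set(distractors)
    for attr in preferred_attrs:
        value = value_of(referent, attr)
        ruled_out = {d for d in remaining if value_of(d, attr) != value}
        if ruled_out:                        # attribute has discriminatory power
            description.append((attr, value))
            remaining -= ruled_out
        if not remaining:
            return description               # referent uniquely identified
    return None

# Toy domain: pick out d1 among {d1, d2, d3}.
props = {"d1": {"type": "dog", "colour": "black", "size": "small"},
         "d2": {"type": "dog", "colour": "white", "size": "small"},
         "d3": {"type": "cat", "colour": "black", "size": "small"}}
lookup = lambda entity, attr: props[entity][attr]
print(make_referring_expression("d1", ["d2", "d3"], ["type", "colour", "size"], lookup))
# [('type', 'dog'), ('colour', 'black')] -- "the black dog"
```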
arxiv-668598 | cmp-lg/9504021 | Phonological Derivation in Optimality Theory | <|reference_start|>Phonological Derivation in Optimality Theory: Optimality Theory is a constraint-based theory of phonology which allows constraints to be violated. Consequently, implementing the theory presents problems for declarative constraint-based processing frameworks. On the basis of two regularity assumptions, that candidate sets are regular and that constraints can be modelled by transducers, this paper presents and proves correct algorithms for computing the action of constraints, and hence deriving surface forms.<|reference_end|> | arxiv | @article{ellison1995phonological,
title={Phonological Derivation in Optimality Theory},
author={T. Mark Ellison (now INESC, Lisbon)},
journal={Coling 94:1007-1013 (Vol II)},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504021},
primaryClass={cmp-lg cs.CL}
} | ellison1995phonological |
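The paper's contribution is the regular case, with candidate sets as regular languages and constraints as transducers, but the winnowing action of ranked, violable constraints is easy to see over a finite candidate set. A toy sketch (Python; constraints are assumed here to be functions from a candidate to its violation count):

```python
def optimal(candidates, ranked_constraints):
    """Evaluate a finite candidate set against constraints in ranking
    order: at each rank, keep only the candidates with the fewest
    violations, so a higher-ranked constraint can never be traded off
    against a lower-ranked one."""
    survivors = list(candidates)
    for constraint in ranked_constraints:
        fewest = min(map(constraint, survivors))
        survivors = [c for c in survivors if constraint(c) == fewest]
    return survivors

# e.g. a NoCoda-style constraint ranked above a simple brevity penalty:
no_coda = lambda form: sum(1 for syll in form.split(".")
                           if not syll.endswith(("a", "e", "i", "o", "u")))
shorter = lambda form: len(form)
print(optimal(["pat", "pa.ta", "pa"], [no_coda, shorter]))   # ['pa']
```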
arxiv-668599 | cmp-lg/9504022 | Constraints, Exceptions and Representations | <|reference_start|>Constraints, Exceptions and Representations: This paper shows that default-based phonologies have the potential to capture morphophonological generalisations which cannot be captured by non-default theories. In achieving this result, I offer a characterisation of Underspecification Theory and Optimality Theory in terms of their methods for ordering defaults. The result means that machine learning techniques for building non-default analyses may not provide a suitable basis for morphophonological analysis.<|reference_end|> | arxiv | @article{ellison1995constraints,
title={Constraints, Exceptions and Representations},
author={T. Mark Ellison (currently INESC, Lisbon)},
journal={Proc. of ACL SIGPHON First Meeting},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504022},
primaryClass={cmp-lg cs.CL}
} | ellison1995constraints
arxiv-668600 | cmp-lg/9504023 | TAKTAG: Two-phase learning method for hybrid statistical/rule-based part-of-speech disambiguation | <|reference_start|>TAKTAG: Two-phase learning method for hybrid statistical/rule-based part-of-speech disambiguation: Both statistical and rule-based approaches to part-of-speech (POS) disambiguation have their own advantages and limitations. Especially for Korean, the narrow windows provided by hidden Markov models (HMMs) cannot cover the lexical and long-distance dependencies necessary for POS disambiguation. On the other hand, rule-based approaches lack accuracy and flexibility with respect to new tag-sets and languages. In this regard, a hybrid statistical/rule-based method that can take advantage of both approaches is called for to achieve robust and flexible POS disambiguation. We present one such method: a two-phase learning architecture for hybrid statistical/rule-based POS disambiguation, especially for Korean. In this method, the statistical learning of morphological tagging is error-corrected by the rule-based learning of a Brill [1992]-style tagger. We also design a hierarchical and flexible Korean tag-set to cope with multiple tagging applications, each of which requires a different tag-set. Our experiments show that the two-phase learning method can overcome the undesirable features of solely HMM-based or solely rule-based tagging, especially for morphologically complex Korean.<|reference_end|> | arxiv | @article{lee1995taktag:,
title={TAKTAG: Two-phase learning method for hybrid statistical/rule-based
part-of-speech disambiguation},
author={Geunbae Lee and Jong-Hyeok Lee and Sanghyun Shin (Department of
Computer Science & Engineering and Postech Information Research Laboratory,
Pohang University of Science & Technology)},
journal={arXiv preprint arXiv:cmp-lg/9504023},
year={1995},
archivePrefix={arXiv},
eprint={cmp-lg/9504023},
primaryClass={cmp-lg cs.CL}
} | lee1995taktag: |
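The second, error-driven phase of such a two-phase tagger can be illustrated with a toy Brill-style learner. The sketch below (Python; a single previous-tag rule template, where the real system uses richer lexical and contextual templates over the output of the HMM phase) greedily picks, one at a time, the correction rule with the best net reduction in the initial tagger's residual errors:

```python
def rule_gain(gold, current, rule):
    """Net error reduction if rule (prev_tag, from_tag, to_tag) were applied."""
    prev, frm, to = rule
    gain = 0
    for i in range(1, len(current)):
        if current[i - 1] == prev and current[i] == frm:
            if gold[i] == to:
                gain += 1            # an error this rule would fix
            elif gold[i] == frm:
                gain -= 1            # a correct tag this rule would clobber
    return gain

def apply_rule(current, rule):
    prev, frm, to = rule
    out = list(current)
    for i in range(1, len(current)):
        if current[i - 1] == prev and current[i] == frm:   # contexts read pre-application
            out[i] = to
    return out

def learn_rules(gold, predicted, max_rules=10):
    """gold/predicted: parallel tag sequences for the same tokens.  Returns
    an ordered list of (prev_tag, from_tag, to_tag) correction rules."""
    current, rules = list(predicted), []
    for _ in range(max_rules):
        candidates = {(current[i - 1], current[i], gold[i])
                      for i in range(1, len(current)) if current[i] != gold[i]}
        best = max(candidates, key=lambda r: rule_gain(gold, current, r), default=None)
        if best is None or rule_gain(gold, current, best) <= 0:
            break
        rules.append(best)
        current = apply_rule(current, best)
    return rules

# learn_rules(gold=["D","N","V","D","N"], predicted=["D","V","V","D","V"])
# -> [("D", "V", "N")]: "retag V as N when the previous tag is D".
```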