corpus_id | paper_id | title | abstract | source | bibtex | citation_key |
---|---|---|---|---|---|---|
arxiv-668401 | cmp-lg/9407030 | Computing FIRST and FOLLOW Functions for Feature-Theoretic Grammars | <|reference_start|>Computing FIRST and FOLLOW Functions for Feature-Theoretic Grammars: This paper describes an algorithm for the computation of FIRST and FOLLOW sets for use with feature-theoretic grammars in which the value of the sets consists of pairs of feature-theoretic categories. The algorithm preserves as much information from the grammars as possible, using negative restriction to define equivalence classes. Addition of a simple data structure leads to an order of magnitude improvement in execution time over a naive implementation.<|reference_end|> | arxiv | @article{trujillo1994computing,
title={Computing FIRST and FOLLOW Functions for Feature-Theoretic Grammars},
author={Arturo Trujillo (Computer Laboratory, University of Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9407030},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407030},
primaryClass={cmp-lg cs.CL}
} | trujillo1994computing |
arxiv-668402 | cmp-lg/9408001 | The Correct and Efficient Implementation of Appropriateness Specifications for Typed Feature Structures | <|reference_start|>The Correct and Efficient Implementation of Appropriateness Specifications for Typed Feature Structures: In this paper, we argue that type inferencing incorrectly implements appropriateness specifications for typed feature structures, promote a combination of type resolution and unfilling as a correct and efficient alternative, and consider the expressive limits of this alternative approach. Throughout, we use feature cooccurence restrictions as illustration and linguistic motivation.<|reference_end|> | arxiv | @article{gerdemann1994the,
title={The Correct and Efficient Implementation of Appropriateness
Specifications for Typed Feature Structures},
author={Dale Gerdemann and Paul John King (University of Tuebingen, Germany)},
journal={COLING 94 paper},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408001},
primaryClass={cmp-lg cs.CL}
} | gerdemann1994the |
arxiv-668403 | cmp-lg/9408002 | Computational Analyses of Arabic Morphology | <|reference_start|>Computational Analyses of Arabic Morphology: This paper demonstrates how a (multi-tape) two-level formalism can be used to write two-level grammars for Arabic non-linear morphology using a high level, but computationally tractable, notation. Three illustrative grammars are provided based on CV-, moraic- and affixational analyses. These are complemented by a proposal for handling the hitherto computationally untreated problem of the broken plural. It will be shown that the best grammars for describing Arabic non-linear morphology are moraic in the case of templatic stems, and affixational in the case of a-templatic stems. The paper will demonstrate how the broken plural can be derived under two-level theory via the `implicit' derivation of the singular.<|reference_end|> | arxiv | @article{kiraz1994computational,
title={Computational Analyses of Arabic Morphology},
author={George A. Kiraz},
journal={arXiv preprint arXiv:cmp-lg/9408002},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408002},
primaryClass={cmp-lg cs.CL}
} | kiraz1994computational |
arxiv-668404 | cmp-lg/9408003 | Typed Feature Structures as Descriptions | <|reference_start|>Typed Feature Structures as Descriptions: A description is an entity that can be interpreted as true or false of an object, and using feature structures as descriptions accrues several computational benefits. In this paper, I create an explicit interpretation of a typed feature structure used as a description, define the notion of a satisfiable feature structure, and create a simple and effective algorithm to decide if a feature structure is satisfiable.<|reference_end|> | arxiv | @article{king1994typed,
title={Typed Feature Structures as Descriptions},
author={Paul John King (University of Tuebingen, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9408003},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408003},
primaryClass={cmp-lg cs.CL}
} | king1994typed |
arxiv-668405 | cmp-lg/9408004 | Parsing with Principles and Probabilities | <|reference_start|>Parsing with Principles and Probabilities: This paper is an attempt to bring together two approaches to language analysis. The possible use of probabilistic information in principle-based grammars and parsers is considered, including discussion on some theoretical and computational problems that arise. Finally a partial implementation of these ideas is presented, along with some preliminary results from testing on a small set of sentences.<|reference_end|> | arxiv | @article{fordham1994parsing,
title={Parsing with Principles and Probabilities},
author={Andrew Fordham (Dept. of Sociology, University of Surrey, UK), and
Matthew Crocker (Centre for Cognitive Science, University of Edinburgh, UK)},
journal={arXiv preprint arXiv:cmp-lg/9408004},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408004},
primaryClass={cmp-lg cs.CL}
} | fordham1994parsing |
arxiv-668406 | cmp-lg/9408005 | A Modular and Flexible Architecture for an Integrated Corpus Query System | <|reference_start|>A Modular and Flexible Architecture for an Integrated Corpus Query System: The paper describes the architecture of an integrated and extensible corpus query system developed at the University of Stuttgart and gives examples of some of the modules realized within this architecture. The modules form the core of a corpus workbench. Within the proposed architecture, information required for the evaluation of queries may be derived from different knowledge sources (the corpus text, databases, on-line thesauri) and by different means: either through direct lookup in a database or by calling external tools which may infer the necessary information at the time of query evaluation. The information available and the method of information access can be stated declaratively and individually for each corpus, leading to a flexible, extensible and modular corpus workbench.<|reference_end|> | arxiv | @article{christ1994a,
title={A Modular and Flexible Architecture for an Integrated Corpus Query
System},
author={Oliver Christ (IMS Stuttgart, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9408005},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408005},
primaryClass={cmp-lg cs.CL}
} | christ1994a |
arxiv-668407 | cmp-lg/9408006 | LHIP: Extended DCGs for Configurable Robust Parsing | <|reference_start|>LHIP: Extended DCGs for Configurable Robust Parsing: We present LHIP, a system for incremental grammar development using an extended DCG formalism. The system uses a robust island-based parsing method controlled by user-defined performance thresholds.<|reference_end|> | arxiv | @article{ballim1994lhip:,
title={LHIP: Extended DCGs for Configurable Robust Parsing},
author={Afzal Ballim and Graham Russell (ISSCO, Geneva)},
journal={Proc. Coling 1994, vol.1 pp.501-507},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408006},
primaryClass={cmp-lg cs.CL}
} | ballim1994lhip: |
arxiv-668408 | cmp-lg/9408007 | Emergent Linguistic Rules from Inducing Decision Trees: Disambiguating Discourse Clue Words | <|reference_start|>Emergent Linguistic Rules from Inducing Decision Trees: Disambiguating Discourse Clue Words: We apply decision tree induction to the problem of discourse clue word sense disambiguation with a genetic algorithm. The automatic partitioning of the training set which is intrinsic to decision tree induction gives rise to linguistically viable rules.<|reference_end|> | arxiv | @article{siegel1994emergent,
title={Emergent Linguistic Rules from Inducing Decision Trees: Disambiguating
Discourse Clue Words},
author={Eric V. Siegel and Kathleen R. McKeown (Columbia University)},
journal={AAAI94 proceedings},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408007},
primaryClass={cmp-lg cs.CL}
} | siegel1994emergent |
arxiv-668409 | cmp-lg/9408008 | Statistical versus symbolic parsing for captioned-information retrieval | <|reference_start|>Statistical versus symbolic parsing for captioned-information retrieval: We discuss implementation issues of MARIE-1, a mostly symbolic parser fully implemented, and MARIE-2, a more statistical parser partially implemented. They address a corpus of 100,000 picture captions. We argue that the mixed approach of MARIE-2 should be better for this corpus because its algorithms (not data) are simpler.<|reference_end|> | arxiv | @article{rowe1994statistical,
title={Statistical versus symbolic parsing for captioned-information retrieval},
author={Neil C. Rowe (Code CS/Rp, Department of Computer Science, Naval
Postgraduate School, Monterey)},
journal={arXiv preprint arXiv:cmp-lg/9408008},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408008},
primaryClass={cmp-lg cs.CL}
} | rowe1994statistical |
arxiv-668410 | cmp-lg/9408009 | Tagging accurately -- Don't guess if you know | <|reference_start|>Tagging accurately -- Don't guess if you know: We discuss combining knowledge-based (or rule-based) and statistical part-of-speech taggers. We use two mature taggers, ENGCG and Xerox Tagger, to independently tag the same text and combine the results to produce a fully disambiguated text. In a 27000 word test sample taken from a previously unseen corpus we achieve 98.5% accuracy. This paper presents the data in detail. We describe the problems we encountered in the course of combining the two taggers and discuss the problem of evaluating taggers.<|reference_end|> | arxiv | @article{tapanainen1994tagging,
title={Tagging accurately -- Don't guess if you know},
author={Pasi Tapanainen (Rank Xerox Research Centre, Grenoble Laboratory),
Atro Voutilainen (Research Unit for Computational Linguistics, University of
Helsinki)},
journal={arXiv preprint arXiv:cmp-lg/9408009},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408009},
primaryClass={cmp-lg cs.CL}
} | tapanainen1994tagging |
arxiv-668411 | cmp-lg/9408010 | On Using Selectional Restriction in Language Models for Speech Recognition | <|reference_start|>On Using Selectional Restriction in Language Models for Speech Recognition: In this paper, we investigate the use of selectional restriction -- the constraints a predicate imposes on its arguments -- in a language model for speech recognition. We use an un-tagged corpus, followed by a public domain tagger and a very simple finite state machine to obtain verb-object pairs from unrestricted English text. We then measure the impact the knowledge of the verb has on the prediction of the direct object in terms of the perplexity of a cluster-based language model. The results show that even though a clustered bigram is more useful than a verb-object model, the combination of the two leads to an improvement over the clustered bigram model.<|reference_end|> | arxiv | @article{ueberla1994on,
title={On Using Selectional Restriction in Language Models for Speech
Recognition},
author={Joerg P. Ueberla (Simon Fraser University, Vancouver, Canada)},
journal={arXiv preprint arXiv:cmp-lg/9408010},
year={1994},
number={CMPT TR 94-03},
archivePrefix={arXiv},
eprint={cmp-lg/9408010},
primaryClass={cmp-lg cs.CL}
} | ueberla1994on |
arxiv-668412 | cmp-lg/9408011 | Distributional Clustering of English Words | <|reference_start|>Distributional Clustering of English Words: We describe and experimentally evaluate a method for automatically clustering words according to their distribution in particular syntactic contexts. Deterministic annealing is used to find lowest distortion sets of clusters. As the annealing parameter increases, existing clusters become unstable and subdivide, yielding a hierarchical ``soft'' clustering of the data. Clusters are used as the basis for class models of word cooccurrence, and the models are evaluated with respect to held-out test data.<|reference_end|> | arxiv | @article{pereira1994distributional,
title={Distributional Clustering of English Words},
author={Fernando Pereira (AT&T Bell Laboratories), Naftali Tishby (Hebrew
University), Lillian Lee (Harvard University)},
journal={arXiv preprint arXiv:cmp-lg/9408011},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408011},
primaryClass={cmp-lg cs.CL}
} | pereira1994distributional |
arxiv-668413 | cmp-lg/9408012 | Approximate N-Gram Markov Model for Natural Language Generation | <|reference_start|>Approximate N-Gram Markov Model for Natural Language Generation: This paper proposes an Approximate n-gram Markov Model for bag generation. Directed word association pairs with distances are used to approximate (n-1)-gram and n-gram training tables. This model has the parameters of a word association model and the merits of both the word association model and the Markov model. The training knowledge for bag generation can also be applied to lexical selection in machine translation design.<|reference_end|> | arxiv | @article{chen1994approximate,
title={Approximate N-Gram Markov Model for Natural Language Generation},
author={Hsin-Hsi Chen (National Taiwan University) and Yue-Shi Lee (National
Taiwan University)},
journal={arXiv preprint arXiv:cmp-lg/9408012},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408012},
primaryClass={cmp-lg cs.CL}
} | chen1994approximate |
arxiv-668414 | cmp-lg/9408013 | Training and Scaling Preference Functions for Disambiguation | <|reference_start|>Training and Scaling Preference Functions for Disambiguation: We present an automatic method for weighting the contributions of preference functions used in disambiguation. Initial scaling factors are derived as the solution to a least-squares minimization problem, and improvements are then made by hill-climbing. The method is applied to disambiguating sentences in the ATIS (Air Travel Information System) corpus, and the performance of the resulting scaling factors is compared with hand-tuned factors. We then focus on one class of preference function, those based on semantic lexical collocations. Experimental results are presented showing that such functions vary considerably in selecting correct analyses. In particular we define a function that performs significantly better than ones based on mutual information and likelihood ratios of lexical associations.<|reference_end|> | arxiv | @article{alshawi1994training,
title={Training and Scaling Preference Functions for Disambiguation},
author={Hiyan Alshawi (AT&T Bell Laboratories), David Carter (SRI
International, Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9408013},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408013},
primaryClass={cmp-lg cs.CL}
} | alshawi1994training |
arxiv-668415 | cmp-lg/9408014 | Qualitative and Quantitative Models of Speech Translation | <|reference_start|>Qualitative and Quantitative Models of Speech Translation: This paper compares a qualitative reasoning model of translation with a quantitative statistical model. We consider these models within the context of two hypothetical speech translation systems, starting with a logic-based design and pointing out which of its characteristics are best preserved or eliminated in moving to the second, quantitative design. The quantitative language and translation models are based on relations between lexical heads of phrases. Statistical parameters for structural dependency, lexical transfer, and linear order are used to select a set of implicit relations between words in a source utterance, a corresponding set of relations between target language words, and the most likely translation of the original utterance.<|reference_end|> | arxiv | @article{alshawi1994qualitative,
title={Qualitative and Quantitative Models of Speech Translation},
author={Hiyan Alshawi (AT&T Bell Laboratories)},
journal={arXiv preprint arXiv:cmp-lg/9408014},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408014},
primaryClass={cmp-lg cs.CL}
} | alshawi1994qualitative |
arxiv-668416 | cmp-lg/9408015 | Experimentally Evaluating Communicative Strategies: The Effect of the Task | <|reference_start|>Experimentally Evaluating Communicative Strategies: The Effect of the Task: Effective problem solving among multiple agents requires a better understanding of the role of communication in collaboration. In this paper we show that there are communicative strategies that greatly improve the performance of resource-bounded agents, but that these strategies are highly sensitive to the task requirements, situation parameters and agents' resource limitations. We base our argument on two sources of evidence: (1) an analysis of a corpus of 55 problem solving dialogues, and (2) experimental simulations of collaborative problem solving dialogues in an experimental world, Design-World, where we parameterize task requirements, agents' resources and communicative strategies.<|reference_end|> | arxiv | @article{walker1994experimentally,
title={Experimentally Evaluating Communicative Strategies: The Effect of the
Task},
author={Marilyn A. Walker (Mitsubishi Electric Research Laboratories,
Cambridge, Mass.)},
journal={Proceedings of AAAI 94, Seattle},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408015},
primaryClass={cmp-lg cs.CL}
} | walker1994experimentally |
arxiv-668417 | cmp-lg/9408016 | On Implementing an HPSG theory -- Aspects of the logical architecture, the formalization, and the implementation of head-driven phrase structure grammars | <|reference_start|>On Implementing an HPSG theory -- Aspects of the logical architecture, the formalization, and the implementation of head-driven phrase structure grammars: The paper presents some aspects involved in the formalization and implementation of HPSG theories. As basis, the logical setups of Carpenter (1992) and King (1989, 1994) are briefly compared regarding their usefulness as basis for HPSGII (Pollard and Sag 1994). The possibilities for expressing HPSG theories in the HPSGII architecture and in various computational systems (ALE, Troll, CUF, and TFS) are discussed. Beside a formal characterization of the possibilities, the paper investigates the specific choices for constraints with certain linguistic motivations, i.e. the lexicon, structure licencing, and grammatical principles. An ALE implementation of a theory for German proposed by Hinrichs and Nakazawa (1994) is used as example and the ALE grammar is included in the appendix.<|reference_end|> | arxiv | @article{meurers1994on,
title={On Implementing an HPSG theory -- Aspects of the logical architecture,
the formalization, and the implementation of head-driven phrase structure
grammars},
author={Walt Detmar Meurers (SFB 340/B4, University of Tuebingen, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9408016},
year={1994},
number={SFB report Nr.58},
archivePrefix={arXiv},
eprint={cmp-lg/9408016},
primaryClass={cmp-lg cs.CL}
} | meurers1994on |
arxiv-668418 | cmp-lg/9408017 | Reaping the Benefits of Interactive Syntax and Semantics | <|reference_start|>Reaping the Benefits of Interactive Syntax and Semantics: Semantic feedback is an important source of information that a parser could use to deal with local ambiguities in syntax. However, it is difficult to devise a systematic communication mechanism for interactive syntax and semantics. In this article, I propose a variant of left-corner parsing to define the points at which syntax and semantics should interact, an account of grammatical relations and thematic roles to define the content of the communication, and a conflict resolution strategy based on independent preferences from syntax and semantics. The resulting interactive model has been implemented in a program called COMPERE and shown to account for a wide variety of psycholinguistic data on structural and lexical ambiguities.<|reference_end|> | arxiv | @article{mahesh1994reaping,
title={Reaping the Benefits of Interactive Syntax and Semantics},
author={Kavi Mahesh (Georgia Institute of Technology)},
journal={appeared in ACL-94 Proceedings},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408017},
primaryClass={cmp-lg cs.CL}
} | mahesh1994reaping |
arxiv-668419 | cmp-lg/9408018 | Uniform Representations for Syntax-Semantics Arbitration | <|reference_start|>Uniform Representations for Syntax-Semantics Arbitration: Psychological investigations have led to considerable insight into the working of the human language comprehension system. In this article, we look at a set of principles derived from psychological findings to argue for a particular organization of linguistic knowledge along with a particular processing strategy and present a computational model of sentence processing based on those principles. Many studies have shown that human sentence comprehension is an incremental and interactive process in which semantic and other higher-level information interacts with syntactic information to make informed commitments as early as possible at a local ambiguity. Early commitments may be made by using top-down guidance from knowledge of different types, each of which must be applicable independently of others. Further evidence from studies of error recovery and delayed decisions points toward an arbitration mechanism for combining syntactic and semantic information in resolving ambiguities. In order to account for all of the above, we propose that all types of linguistic knowledge must be represented in a common form but must be separable so that they can be applied independently of each other and integrated at processing time by the arbitrator. We present such a uniform representation and a computational model called COMPERE based on the representation and the processing strategy.<|reference_end|> | arxiv | @article{mahesh1994uniform,
title={Uniform Representations for Syntax-Semantics Arbitration},
author={Kavi Mahesh and Kurt P. Eiselt (Georgia Institute of Technology)},
journal={appeared in Cogsci94 Proceedings},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408018},
primaryClass={cmp-lg cs.CL}
} | mahesh1994uniform |
arxiv-668420 | cmp-lg/9408019 | Building a Parser That can Afford to Interact with Semantics | <|reference_start|>Building a Parser That can Afford to Interact with Semantics: Natural language understanding programs get bogged down by the multiplicity of possible syntactic structures while processing real world texts that human understanders do not have much difficulty with. In this work, I analyze the relationships between parsing strategies, the degree of local ambiguity encountered by them, and semantic feedback to syntax, and propose a parsing algorithm called Head-Signaled Left Corner Parsing (HSLC) that minimizes local ambiguities while supporting interactive syntactic and semantic analysis. Such a parser has been implemented in a sentence understanding program called COMPERE.<|reference_end|> | arxiv | @article{mahesh1994building,
title={Building a Parser That can Afford to Interact with Semantics},
author={Kavi Mahesh (Georgia Institute of Technology)},
journal={appeared in AAAI-94 Proceedings},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408019},
primaryClass={cmp-lg cs.CL}
} | mahesh1994building |
arxiv-668421 | cmp-lg/9408020 | Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing | <|reference_start|>Having Your Cake and Eating It Too: Autonomy and Interaction in a Model of Sentence Processing: Is the human language understander a collection of modular processes operating with relative autonomy, or is it a single integrated process? This ongoing debate has polarized the language processing community, with two fundamentally different types of model posited, and with each camp concluding that the other is wrong. One camp puts forth a model with separate processors and distinct knowledge sources to explain one body of data, and the other proposes a model with a single processor and a homogeneous, monolithic knowledge source to explain the other body of data. In this paper we argue that a hybrid approach which combines a unified processor with separate knowledge sources provides an explanation of both bodies of data, and we demonstrate the feasibility of this approach with the computational model called COMPERE. We believe that this approach brings the language processing community significantly closer to offering human-like language processing systems.<|reference_end|> | arxiv | @article{eiselt1994having,
title={Having Your Cake and Eating It Too: Autonomy and Interaction in a Model
of Sentence Processing},
author={Kurt P. Eiselt (College of Computing, Georgia Tech), Kavi Mahesh
(College of Computing, Georgia Tech), and Jennifer K. Holbrook (Dept. of
Psychology, Albion College)},
journal={appeared in AAAI-93 Proceedings},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408020},
primaryClass={cmp-lg cs.CL}
} | eiselt1994having |
arxiv-668422 | cmp-lg/9408021 | A Unified Process Model of Syntactic and Semantic Error Recovery in Sentence Understanding | <|reference_start|>A Unified Process Model of Syntactic and Semantic Error Recovery in Sentence Understanding: The development of models of human sentence processing has traditionally followed one of two paths. Either the model posited a sequence of processing modules, each with its own task-specific knowledge (e.g., syntax and semantics), or it posited a single processor utilizing different types of knowledge inextricably integrated into a monolithic knowledge base. Our previous work in modeling the sentence processor resulted in a model in which different processing modules used separate knowledge sources but operated in parallel to arrive at the interpretation of a sentence. One highlight of this model is that it offered an explanation of how the sentence processor might recover from an error in choosing the meaning of an ambiguous word. Recent experimental work by Laurie Stowe strongly suggests that the human sentence processor deals with syntactic error recovery using a mechanism very much like that proposed by our model of semantic error recovery. Another way to interpret Stowe's finding is this: the human sentence processor consists of a single unified processing module utilizing multiple independent knowledge sources in parallel. A sentence processor built upon this architecture should at times exhibit behavior associated with modular approaches, and at other times act like an integrated system. In this paper we explore some of these ideas via a prototype computational model of sentence processing called COMPERE, and propose a set of psychological experiments for testing our theories.<|reference_end|> | arxiv | @article{holbrook1994a,
title={A Unified Process Model of Syntactic and Semantic Error Recovery in
Sentence Understanding},
author={Jennifer K. Holbrook (Dept. of Psychology, Albion College), Kurt P.
Eiselt (College of Computing, Georgia Tech), and Kavi Mahesh (College of
Computing, Georgia Tech)},
journal={appeared in Cogsci-92 Conference Proceedings},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9408021},
primaryClass={cmp-lg cs.CL}
} | holbrook1994a |
arxiv-668423 | cmp-lg/9409001 | Integrating Knowledge Bases and Statistics in MT | <|reference_start|>Integrating Knowledge Bases and Statistics in MT: We summarize recent machine translation (MT) research at the Information Sciences Institute of USC, and we describe its application to the development of a Japanese-English newspaper MT system. Our work aims at scaling up grammar-based, knowledge-based MT techniques. This scale-up involves the use of statistical methods, both in acquiring effective knowledge resources and in making reasonable linguistic choices in the face of knowledge gaps.<|reference_end|> | arxiv | @article{knight1994integrating,
title={Integrating Knowledge Bases and Statistics in MT},
author={Kevin Knight (USC/ISI), Ishwar Chander (USC/ISI), Matthew Haines
(USC/ISI), Vasileios Hatzivassiloglou (Columbia Univ.), Eduard Hovy
(USC/ISI), Masayo Iida (USC/ISI), Steve K. Luk (USC/ISI), Akitoshi Okumura
(NEC), Richard Whitney (USC/ISI), Kenji Yamada (USC/ISI)},
journal={Proc Association for Machine Translation in the Americas (AMTA-94)},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409001},
primaryClass={cmp-lg cs.CL}
} | knight1994integrating |
arxiv-668424 | cmp-lg/9409002 | Conceptual Association for Compound Noun Analysis | <|reference_start|>Conceptual Association for Compound Noun Analysis: This paper describes research toward the automatic interpretation of compound nouns using corpus statistics. An initial study aimed at syntactic disambiguation is presented. The approach presented bases associations upon thesaurus categories. Association data is gathered from unambiguous cases extracted from a corpus and is then applied to the analysis of ambiguous compound nouns. While the work presented is still in progress, a first attempt to syntactically analyse a test set of 244 examples shows 75% correctness. Future work is aimed at improving this accuracy and extending the technique to assign semantic role information, thus producing a complete interpretation.<|reference_end|> | arxiv | @article{lauer1994conceptual,
title={Conceptual Association for Compound Noun Analysis},
author={Mark Lauer (Microsoft Institute, Sydney)},
journal={Proceedings of the Student Session, 32nd Annual Meeting of the
Association for Computational Linguistics, Las Cruces, NM., 1994 pp337-339},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409002},
primaryClass={cmp-lg cs.CL}
} | lauer1994conceptual |
arxiv-668425 | cmp-lg/9409003 | A Probabilistic Model of Compound Nouns | <|reference_start|>A Probabilistic Model of Compound Nouns: Compound nouns such as example noun compound are becoming more common in natural language and pose a number of difficult problems for NLP systems, notably increasing the complexity of parsing. In this paper we develop a probabilistic model for syntactically analysing such compounds. The model predicts compound noun structures based on knowledge of affinities between nouns, which can be acquired from a corpus. Problems inherent in this corpus-based approach are addressed: data sparseness is overcome by the use of semantically motivated word classes and sense ambiguity is explicitly handled in the model. An implementation based on this model is described in Lauer (1994) and correctly parses 77% of the test set.<|reference_end|> | arxiv | @article{lauer1994a,
title={A Probabilistic Model of Compound Nouns},
author={Mark Lauer (Microsoft Institute, Sydney), Mark Dras (Microsoft
Institute, Sydney)},
journal={7th Australian Joint Conference on AI, 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409003},
primaryClass={cmp-lg cs.CL}
} | lauer1994a |
arxiv-668426 | cmp-lg/9409004 | An Experiment on Learning Appropriate Selectional Restrictions from a Parsed Corpus | <|reference_start|>An Experiment on Learning Appropriate Selectional Restrictions from a Parsed Corpus: We present a methodology to extract Selectional Restrictions at a variable level of abstraction from phrasally analyzed corpora. The method relies on the use of a wide-coverage noun taxonomy and a statistical measure of the co-occurrence of linguistic items. Some experimental results about the performance of the method are provided.<|reference_end|> | arxiv | @article{ribas1994an,
title={An Experiment on Learning Appropriate Selectional Restrictions from a
Parsed Corpus},
author={Francesc Ribas (Universitat Politecnica de Catalunya)},
journal={COLING-94 Proceedings, 769-774},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409004},
primaryClass={cmp-lg cs.CL}
} | ribas1994an |
arxiv-668427 | cmp-lg/9409005 | Focusing for Pronoun Resolution in English Discourse: An Implementation | <|reference_start|>Focusing for Pronoun Resolution in English Discourse: An Implementation: Anaphora resolution is one of the most active research areas in natural language processing. This study examines focusing as a tool for the resolution of pronouns which are a kind of anaphora. Focusing is a discourse phenomenon like anaphora. Candy Sidner formalized focusing in her 1979 MIT PhD thesis and devised several algorithms to resolve definite anaphora including pronouns. She presented her theory in a computational framework but did not generally implement the algorithms. Her algorithms related to focusing and pronoun resolution are implemented in this thesis. This implementation provides a better comprehension of the theory both from a conceptual and a computational point of view. The resulting program is tested on different discourse segments, and evaluation and analysis of the experiments are presented together with the statistical results.<|reference_end|> | arxiv | @article{ersan1994focusing,
title={Focusing for Pronoun Resolution in English Discourse: An Implementation},
author={Ebru Ersan (Brown University) and Varol Akman (Bilkent University,
Ankara)},
journal={arXiv preprint arXiv:cmp-lg/9409005},
year={1994},
number={BU-CEIS-94-29},
archivePrefix={arXiv},
eprint={cmp-lg/9409005},
primaryClass={cmp-lg cs.CL}
} | ersan1994focusing |
arxiv-668428 | cmp-lg/9409006 | Situated Modeling of Epistemic Puzzles | <|reference_start|>Situated Modeling of Epistemic Puzzles: Situation theory is a mathematical theory of meaning introduced by Jon Barwise and John Perry. It has evoked great theoretical and practical interest and motivated the framework of a few `computational' systems. PROSIT is the pioneering work in this direction. Unfortunately, there is a lack of real-life applications on these systems and this study is a preliminary attempt to remedy this deficiency. Here, we examine how much PROSIT reflects situation-theoretic concepts and solve a group of epistemic puzzles using the constructs provided by this programming language.<|reference_end|> | arxiv | @article{ersan1994situated,
title={Situated Modeling of Epistemic Puzzles},
author={Murat Ersan (Brown University) and Varol Akman (Bilkent University,
Ankara)},
journal={arXiv preprint arXiv:cmp-lg/9409006},
year={1994},
number={BU-CEIS-94-30},
archivePrefix={arXiv},
eprint={cmp-lg/9409006},
primaryClass={cmp-lg cs.CL}
} | ersan1994situated |
arxiv-668429 | cmp-lg/9409007 | Treating `Free Word Order' in Machine Translation | <|reference_start|>Treating `Free Word Order' in Machine Translation: In `free word order' languages, every sentence is embedded in its specific context. Among others, the order of constituents is determined by the categories `theme', `rheme' and `contrastive focus'. This paper shows how to recognise and to translate these categories automatically on a sentential basis, so that sentence embedding can be achieved without having to refer to the context. Modifier classes, which are traditionally neglected in linguistic description, are fully covered by the proposed method. (Coling 94, Kyoto, Vol. I, pages 69-75)<|reference_end|> | arxiv | @article{steinberger1994treating,
title={Treating `Free Word Order' in Machine Translation},
author={Ralf Steinberger (UMIST, Manchester, UK)},
journal={arXiv preprint arXiv:cmp-lg/9409007},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409007},
primaryClass={cmp-lg cs.CL}
} | steinberger1994treating |
arxiv-668430 | cmp-lg/9409008 | Parsing of Spoken Language under Time Constraints | <|reference_start|>Parsing of Spoken Language under Time Constraints: Spoken language applications in natural dialogue settings place serious requirements on the choice of processing architecture. Especially under adverse phonetic and acoustic conditions parsing procedures have to be developed which do not only analyse the incoming speech in a time-synchronous and incremental manner, but which are able to schedule their resources according to the varying conditions of the recognition process. Depending on the actual degree of local ambiguity the parser has to select among the available constraints in order to narrow down the search space with as little effort as possible. A parsing approach based on constraint satisfaction techniques is discussed. It provides important characteristics of the desired real-time behaviour and attempts to mimic some of the attention focussing capabilities of the human speech comprehension mechanism.<|reference_end|> | arxiv | @article{menzel1994parsing,
title={Parsing of Spoken Language under Time Constraints},
author={Wolfgang Menzel (University of Hamburg, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9409008},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409008},
primaryClass={cmp-lg cs.CL}
} | menzel1994parsing |
arxiv-668431 | cmp-lg/9409009 | Linguistics Computation, Automatic Model Generation, and Intensions | <|reference_start|>Linguistics Computation, Automatic Model Generation, and Intensions: Techniques are presented for defining models of computational linguistics theories. The methods of generalized diagrams that were developed by this author for modeling artificial intelligence planning and reasoning are shown to be applicable to models of computation of linguistics theories. It is shown that for extensional and intensional interpretations, models can be generated automatically which assign meaning to computations of linguistics theories for natural languages. Keywords: Computational Linguistics, Reasoning Models, G-diagrams For Models, Dynamic Model Implementation, Linguistics and Logics For Artificial Intelligence<|reference_end|> | arxiv | @article{nourani1994linguistics,
title={Linguistics Computation, Automatic Model Generation, and Intensions},
author={Cyrus F. Nourani},
journal={arXiv preprint arXiv:cmp-lg/9409009},
year={1994},
number={METAAI-93-01},
archivePrefix={arXiv},
eprint={cmp-lg/9409009},
primaryClass={cmp-lg cs.CL}
} | nourani1994linguistics |
arxiv-668432 | cmp-lg/9409010 | Inducing Probabilistic Grammars by Bayesian Model Merging | <|reference_start|>Inducing Probabilistic Grammars by Bayesian Model Merging: We describe a framework for inducing probabilistic grammars from corpora of positive samples. First, samples are incorporated by adding ad-hoc rules to a working grammar; subsequently, elements of the model (such as states or nonterminals) are merged to achieve generalization and a more compact representation. The choice of what to merge and when to stop is governed by the Bayesian posterior probability of the grammar given the data, which formalizes a trade-off between a close fit to the data and a default preference for simpler models (`Occam's Razor'). The general scheme is illustrated using three types of probabilistic grammars: Hidden Markov models, class-based n-grams, and stochastic context-free grammars.<|reference_end|> | arxiv | @article{stolcke1994inducing,
title={Inducing Probabilistic Grammars by Bayesian Model Merging},
author={Andreas Stolcke (ICSI, Berkeley, CA), Stephen M. Omohundro (ICSI,
Berkeley, CA)},
journal={Grammatical Inference and Applications. ICGI 1994, pp. 106-118,
Sept. 1994},
year={1994},
doi={10.1007/3-540-58473-0_141},
archivePrefix={arXiv},
eprint={cmp-lg/9409010},
primaryClass={cmp-lg cs.CL}
} | stolcke1994inducing |
arxiv-668433 | cmp-lg/9409011 | Aligning Noisy Parallel Corpora Across Language Groups : Word Pair Feature Matching by Dynamic Time Warping | <|reference_start|>Aligning Noisy Parallel Corpora Across Language Groups : Word Pair Feature Matching by Dynamic Time Warping: We propose a new algorithm called DK-vec for aligning pairs of Asian/Indo-European noisy parallel texts without sentence boundaries. DK-vec improves on previous alignment algorithms in that it handles better the non-linear nature of noisy corpora. The algorithm uses frequency, position and recency information as features for pattern matching. Dynamic Time Warping is used as the matching technique between word pairs. This algorithm produces a small bilingual lexicon which provides anchor points for alignment.<|reference_end|> | arxiv | @article{fung1994aligning,
title={Aligning Noisy Parallel Corpora Across Language Groups : Word Pair
Feature Matching by Dynamic Time Warping},
author={Pascale Fung (Columbia University), Kathleen McKeown (Columbia
University)},
journal={Proc. AMTA-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409011},
primaryClass={cmp-lg cs.CL}
} | fung1994aligning |
arxiv-668434 | cmp-lg/9409012 | Towards an Automatic Dictation System for Translators: the TransTalk Project | <|reference_start|>Towards an Automatic Dictation System for Translators: the TransTalk Project: Professional translators often dictate their translations orally and have them typed afterwards. The TransTalk project aims at automating the second part of this process. Its originality as a dictation system lies in the fact that both the acoustic signal produced by the translator and the source text under translation are made available to the system. Probable translations of the source text can be predicted and these predictions used to help the speech recognition system in its lexical choices. We present the results of the first prototype, which show a marked improvement in the performance of the speech recognition task when translation predictions are taken into account.<|reference_end|> | arxiv | @article{dymetman1994towards,
title={Towards an Automatic Dictation System for Translators: the TransTalk
Project},
author={Marc Dymetman, Julie Brousseau, George Foster, Pierre Isabelle, Yves
Normandin, Pierre Plamondon},
journal={arXiv preprint arXiv:cmp-lg/9409012},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9409012},
primaryClass={cmp-lg cs.CL}
} | dymetman1994towards |
arxiv-668435 | cmp-lg/9410001 | Improving Language Models by Clustering Training Sentences | <|reference_start|>Improving Language Models by Clustering Training Sentences: Many of the kinds of language model used in speech understanding suffer from imperfect modeling of intra-sentential contextual influences. I argue that this problem can be addressed by clustering the sentences in a training corpus automatically into subcorpora on the criterion of entropy reduction, and calculating separate language model parameters for each cluster. This kind of clustering offers a way to represent important contextual effects and can therefore significantly improve the performance of a model. It also offers a reasonably automatic means to gather evidence on whether a more complex, context-sensitive model using the same general kind of linguistic information is likely to reward the effort that would be required to develop it: if clustering improves the performance of a model, this proves the existence of further context dependencies, not exploited by the unclustered model. As evidence for these claims, I present results showing that clustering improves some models but not others for the ATIS domain. These results are consistent with other findings for such models, suggesting that the existence or otherwise of an improvement brought about by clustering is indeed a good pointer to whether it is worth developing further the unclustered model.<|reference_end|> | arxiv | @article{carter1994improving,
title={Improving Language Models by Clustering Training Sentences},
author={David Carter (SRI International, Cambridge, UK)},
journal={arXiv preprint arXiv:cmp-lg/9410001},
year={1994},
number={SRI-CRC-045},
archivePrefix={arXiv},
eprint={cmp-lg/9410001},
primaryClass={cmp-lg cs.CL}
} | carter1994improving |
arxiv-668436 | cmp-lg/9410002 | Lexikoneintraege fuer deutsche Adverbien (Dictionary Entries for German Adverbs) | <|reference_start|>Lexikoneintraege fuer deutsche Adverbien (Dictionary Entries for German Adverbs): Modifiers in general, and adverbs in particular, are neglected categories in linguistics, and consequently, their treatment in Natural Language Processing poses problems. In this article, we present the dictionary information for German adverbs which is necessary to deal with word order, degree modifier scope and other problems in NLP. We also give evidence for the claim that a classification according to position classes differs from any semantic classification.<|reference_end|> | arxiv | @article{steinberger1994lexikoneintraege,
title={Lexikoneintraege fuer deutsche Adverbien (Dictionary Entries for German
Adverbs)},
author={Ralf Steinberger (UMIST, Manchester, UK)},
journal={arXiv preprint arXiv:cmp-lg/9410002},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410002},
primaryClass={cmp-lg cs.CL}
} | steinberger1994lexikoneintraege |
arxiv-668437 | cmp-lg/9410003 | Principle Based Semantics for HPSG | <|reference_start|>Principle Based Semantics for HPSG: The paper presents a constraint based semantic formalism for HPSG. The advantages of the formalism are shown with respect to a grammar for a fragment of German that deals with (i) quantifier scope ambiguities triggered by scrambling and/or movement and (ii) ambiguities that arise from the collective/distributive distinction of plural NPs. The syntax-semantics interface directly implements syntactic conditions on quantifier scoping and distributivity. The construction of semantic representations is guided by general principles governing the interaction between syntax and semantics. Each of these principles acts as a constraint to narrow down the set of possible interpretations of a sentence. Meanings of ambiguous sentences are represented by single partial representations (so-called U(nderspecified) D(iscourse) R(epresentation) S(tructure)s) to which further constraints can be added monotonically to gain more information about the content of a sentence. There is no need to build up a large number of alternative representations of the sentence which are then filtered by subsequent discourse and world knowledge. The advantage of UDRSs is not only that they allow for monotonic incremental interpretation but also that they are equipped with truth conditions and a proof theory that allows for inferences to be drawn directly on structures where quantifier scope is not resolved.<|reference_end|> | arxiv | @article{frank1994principle,
title={Principle Based Semantics for HPSG},
author={A. Frank and U. Reyle (Institute for Computational Linguistics,
University of Stuttgart)},
journal={arXiv preprint arXiv:cmp-lg/9410003},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410003},
primaryClass={cmp-lg cs.CL}
} | frank1994principle |
arxiv-668438 | cmp-lg/9410004 | Spelling Correction in Agglutinative Languages | <|reference_start|>Spelling Correction in Agglutinative Languages: This paper presents an approach to spelling correction in agglutinative languages that is based on two-level morphology and a dynamic programming based search algorithm. Spelling correction in agglutinative languages is significantly different than in languages like English. The concept of a word in such languages is much wider than the entries found in a dictionary, owing to productive word formation by derivational and inflectional affixations. After an overview of certain issues and relevant mathematical preliminaries, we formally present the problem and our solution. We then present results from our experiments with spelling correction in Turkish, a Ural--Altaic agglutinative language. Our results indicate that we can find the intended correct word in 95% of the cases and offer it as the first candidate in 74% of the cases, when the edit distance is 1.<|reference_end|> | arxiv | @article{oflazer1994spelling,
title={Spelling Correction in Agglutinative Languages},
author={Kemal Oflazer},
journal={arXiv preprint arXiv:cmp-lg/9410004},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410004},
primaryClass={cmp-lg cs.CL}
} | oflazer1994spelling |
arxiv-668439 | cmp-lg/9410005 | A Centering Approach to Pronouns | <|reference_start|>A Centering Approach to Pronouns: In this paper we present a formalization of the centering approach to modeling attentional structure in discourse and use it as the basis for an algorithm to track discourse context and bind pronouns. As described in Grosz, Joshi and Weinstein (1986), the process of centering attention on entities in the discourse gives rise to the intersentential transitional states of continuing, retaining and shifting. We propose an extension to these states which handles some additional cases of multiple ambiguous pronouns. The algorithm has been implemented in an HPSG natural language system which serves as the interface to a database query application.<|reference_end|> | arxiv | @article{brennan1994a,
title={A Centering Approach to Pronouns},
author={Susan E. Brennan, Marilyn Walker Friedman, Carl J. Pollard},
journal={Association for Computational Linguistics, 1987, pp. 155-162},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410005},
primaryClass={cmp-lg cs.CL}
} | brennan1994a |
arxiv-668440 | cmp-lg/9410006 | Evaluating Discourse Processing Algorithms | <|reference_start|>Evaluating Discourse Processing Algorithms: In order to take steps towards establishing a methodology for evaluating Natural Language systems, we conducted a case study. We attempt to evaluate two different approaches to anaphoric processing in discourse by comparing the accuracy and coverage of two published algorithms for finding the co-specifiers of pronouns in naturally occurring texts and dialogues. We present the quantitative results of hand-simulating these algorithms, but this analysis naturally gives rise to both a qualitative evaluation and recommendations for performing such evaluations in general. We illustrate the general difficulties encountered with quantitative evaluation. These are problems with: (a) allowing for underlying assumptions, (b) determining how to handle underspecifications, and (c) evaluating the contribution of false positives and error chaining.<|reference_end|> | arxiv | @article{walker1994evaluating,
title={Evaluating Discourse Processing Algorithms},
author={Marilyn A. Walker},
journal={Association for Computational Linguistics, 1989, pp. 251-262},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410006},
primaryClass={cmp-lg cs.CL}
} | walker1994evaluating |
arxiv-668441 | cmp-lg/9410007 | A Formal Look at Dependency Grammars and Phrase-Structure Grammars, with Special Consideration of Word-Order Phenomena | <|reference_start|>A Formal Look at Dependency Grammars and Phrase-Structure Grammars, with Special Consideration of Word-Order Phenomena: The central role of the lexicon in Meaning-Text Theory (MTT) and other dependency-based linguistic theories cannot be replicated in linguistic theories based on context-free grammars (CFGs). We describe Tree Adjoining Grammar (TAG) as a system that arises naturally in the process of lexicalizing CFGs. A TAG grammar can therefore be compared directly to a Meaning-Text Model (MTM). We illustrate this point by discussing the computational complexity of certain non-projective constructions, and suggest a way of incorporating locality of word-order definitions into the Surface-Syntactic Component of MTT.<|reference_end|> | arxiv | @article{rambow1994a,
title={A Formal Look at Dependency Grammars and Phrase-Structure Grammars, with
Special Consideration of Word-Order Phenomena},
author={Owen Rambow (Paris 7) and Aravind Joshi (U. Penn.)},
journal={arXiv preprint arXiv:cmp-lg/9410007},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410007},
primaryClass={cmp-lg cs.CL}
} | rambow1994a |
arxiv-668442 | cmp-lg/9410008 | Recognizing Text Genres with Simple Metrics Using Discriminant Analysis | <|reference_start|>Recognizing Text Genres with Simple Metrics Using Discriminant Analysis: A simple method for categorizing texts into predetermined text genre categories using the statistical standard technique of discriminant analysis is demonstrated with application to the Brown corpus. Discriminant analysis makes it possible to use a large number of parameters that may be specific for a certain corpus or information stream, and combine them into a small number of functions, with the parameters weighted on the basis of how useful they are for discriminating text genres. An application to information retrieval is discussed.<|reference_end|> | arxiv | @article{karlgren1994recognizing,
title={Recognizing Text Genres with Simple Metrics Using Discriminant Analysis},
author={Jussi Karlgren (SICS), Douglass Cutting (Apple)},
journal={arXiv preprint arXiv:cmp-lg/9410008},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410008},
primaryClass={cmp-lg cs.CL}
} | karlgren1994recognizing |
arxiv-668443 | cmp-lg/9410009 | Lexical Functions and Machine Translation | <|reference_start|>Lexical Functions and Machine Translation: This paper discusses the lexicographical concept of lexical functions and their potential exploitation in the development of a machine translation lexicon designed to handle collocations.<|reference_end|> | arxiv | @article{heylen1994lexical,
title={Lexical Functions and Machine Translation},
author={Dirk Heylen, Kerry G. Maxwell, Marc Verhagen},
journal={arXiv preprint arXiv:cmp-lg/9410009},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410009},
primaryClass={cmp-lg cs.CL}
} | heylen1994lexical |
arxiv-668444 | cmp-lg/9410010 | XTAG system - A Wide Coverage Grammar for English | <|reference_start|>XTAG system - A Wide Coverage Grammar for English: This paper presents the XTAG system, a grammar development tool based on the Tree Adjoining Grammar (TAG) formalism that includes a wide-coverage syntactic grammar for English. The various components of the system are discussed and preliminary evaluation results from the parsing of various corpora are given. Results from the comparison of XTAG against the IBM statistical parser and the Alvey Natural Language Tool parser are also given.<|reference_end|> | arxiv | @article{doran1994xtag,
title={XTAG system - A Wide Coverage Grammar for English},
author={Christy Doran, Dania Egedi, Beth Ann Hockey, B. Srinivas, and Martin
Zaidel (University of Pennsylvania)},
journal={Proceedings of the 15th International Conference on Computational
Linguistics (COLING 94), Kyoto, Japan, August 1994, pp. 922-928},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410010},
primaryClass={cmp-lg cs.CL}
} | doran1994xtag |
arxiv-668445 | cmp-lg/9410011 | Dilemma - An Instant Lexicographer | <|reference_start|>Dilemma - An Instant Lexicographer: Dilemma is intended to enhance quality and increase productivity of expert human translators by presenting to the writer relevant lexical information mechanically extracted from comparable existing translations, thus replacing - or compensating for the absence of - a lexicographer and stand-by terminologist rather than the translator. Using statistics and crude surface analysis and a minimum of prior information, Dilemma identifies instances and suggests their counterparts in parallel source and target texts, on all levels down to individual words. Dilemma forms part of a tool kit for translation where focus is on text structure and over-all consistency in large text volumes rather than on framing sentences, on interaction between many actors in a large project rather than on retrieval of machine-stored data and on decision making rather than on application of given rules. In particular, the system has been tuned to the needs of the ongoing translation of European Community legislation into the languages of candidate member countries. The system has been demonstrated to and used by professional translators with promising results.<|reference_end|> | arxiv | @article{karlgren1994dilemma,
title={Dilemma - An Instant Lexicographer},
author={Hans Karlgren (Scandface), Jussi Karlgren (SICS), Magnus Nordström
(SICS), Paul Pettersson (SICS), Bengt Wahrolén (Scandface)},
journal={arXiv preprint arXiv:cmp-lg/9410011},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410011},
primaryClass={cmp-lg cs.CL}
} | karlgren1994dilemma |
arxiv-668446 | cmp-lg/9410012 | Does Baum-Welch Re-estimation Help Taggers? | <|reference_start|>Does Baum-Welch Re-estimation Help Taggers?: In part of speech tagging by Hidden Markov Model, a statistical model is used to assign grammatical categories to words in a text. Early work in the field relied on a corpus which had been tagged by a human annotator to train the model. More recently, Cutting et al. (1992) suggest that training can be achieved with a minimal lexicon and a limited amount of a priori information about probabilities, by using Baum-Welch re-estimation to automatically refine the model. In this paper, I report two experiments designed to determine how much manual training information is needed. The first experiment suggests that initial biasing of either lexical or transition probabilities is essential to achieve a good accuracy. The second experiment reveals that there are three distinct patterns of Baum-Welch re-estimation. In two of the patterns, the re-estimation ultimately reduces the accuracy of the tagging rather than improving it. The pattern which is applicable can be predicted from the quality of the initial model and the similarity between the tagged training corpus (if any) and the corpus to be tagged. Heuristics for deciding how to use re-estimation in an effective manner are given. The conclusions are broadly in agreement with those of Merialdo (1994), but give greater detail about the contributions of different parts of the model.<|reference_end|> | arxiv | @article{elworthy1994does,
title={Does Baum-Welch Re-estimation Help Taggers?},
author={David Elworthy (Sharp Laboratories of Europe Ltd)},
journal={arXiv preprint arXiv:cmp-lg/9410012},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410012},
primaryClass={cmp-lg cs.CL}
} | elworthy1994does |
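Elworthy's findings are easier to interpret with the re-estimation step in front of you. The sketch below is a minimal, generic implementation of one Baum-Welch pass for a discrete HMM tagger — not Elworthy's code; the dense-matrix layout, the absence of smoothing, and the omission of numerical scaling for long sentences are all simplifying assumptions.
```python
# Minimal sketch of one Baum-Welch (EM) pass for a discrete HMM tagger.
# pi: initial state probs (N,), A: transitions (N,N), B: emissions (N,V).
# Scaling against underflow is omitted for brevity.
import numpy as np

def forward_backward(obs, pi, A, B):
    """E-step for one sentence: posterior state and transition counts."""
    T, N = len(obs), len(pi)
    alpha, beta = np.zeros((T, N)), np.zeros((T, N))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                      # forward recursion
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):             # backward recursion
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    ll = alpha[-1].sum()                       # sentence likelihood
    gamma = alpha * beta / ll                  # P(state_t = i | obs)
    xi = np.zeros((N, N))                      # expected transition counts
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, obs[t + 1]] * beta[t + 1]) * A / ll
    return gamma, xi

def baum_welch_step(corpus, pi, A, B):
    """M-step over a corpus of tokenized sentences (word-id lists):
    re-estimates A and B from the expected counts."""
    N, V = B.shape
    A_num, B_num = np.zeros((N, N)), np.zeros((N, V))
    for obs in corpus:
        gamma, xi = forward_backward(obs, pi, A, B)
        A_num += xi
        for t, o in enumerate(obs):
            B_num[:, o] += gamma[t]
    return (A_num / A_num.sum(axis=1, keepdims=True),
            B_num / B_num.sum(axis=1, keepdims=True))
```
Whether iterating `baum_welch_step` helps is exactly the paper's question: with well-biased initial `A` and `B`, tagging accuracy can peak after a pass or two and then degrade, which is why recognizing which of the three re-estimation patterns applies matters in practice.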
arxiv-668447 | cmp-lg/9410013 | Automatic Error Detection in Part of Speech Tagging | <|reference_start|>Automatic Error Detection in Part of Speech Tagging: A technique for detecting errors made by Hidden Markov Model taggers is described, based on comparing observable values of the tagging process with a threshold. The resulting approach allows the accuracy of the tagger to be improved by accepting a lower efficiency, defined as the proportion of words which are tagged. Empirical observations are presented which demonstrate the validity of the technique and suggest how to choose an appropriate threshold.<|reference_end|> | arxiv | @article{elworthy1994automatic,
title={Automatic Error Detection in Part of Speech Tagging},
author={David Elworthy (Sharp Laboratories of Europe)},
journal={arXiv preprint arXiv:cmp-lg/9410013},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410013},
primaryClass={cmp-lg cs.CL}
} | elworthy1994automatic |
arxiv-668448 | cmp-lg/9410014 | A Freely Available Syntactic Lexicon for English | <|reference_start|>A Freely Available Syntactic Lexicon for English: This paper presents a syntactic lexicon for English that was originally derived from the Oxford Advanced Learner's Dictionary and the Oxford Dictionary of Current Idiomatic English, and then modified and augmented by hand. There are more than 37,000 syntactic entries from all 8 parts of speech. An X-windows based tool is available for maintaining the lexicon and performing searches. C and Lisp hooks are also available so that the lexicon can be easily utilized by parsers and other programs.<|reference_end|> | arxiv | @article{egedi1994a,
title={A Freely Available Syntactic Lexicon for English},
author={Dania Egedi, Patrick Martin (University of Pennsylvania)},
journal={Proceedings of the International Workshop on Sharable Natural
Language Resources, Nara, Japan, August 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410014},
primaryClass={cmp-lg cs.CL}
} | egedi1994a |
arxiv-668449 | cmp-lg/9410015 | Lexicalization and Grammar Development | <|reference_start|>Lexicalization and Grammar Development: In this paper we present a fully lexicalized grammar formalism as a particularly attractive framework for the specification of natural language grammars. We discuss in detail Feature-based, Lexicalized Tree Adjoining Grammars (FB-LTAGs), a representative of the class of lexicalized grammars. We illustrate the advantages of lexicalized grammars in various contexts of natural language processing, ranging from wide-coverage grammar development to parsing and machine translation. We also present a method for compact and efficient representation of lexicalized trees.<|reference_end|> | arxiv | @article{srinivas1994lexicalization,
title={Lexicalization and Grammar Development},
author={B. Srinivas, Dania Egedi, Christy Doran, Tilman Becker (University of
Pennsylvania)},
journal={Proceedings of KONVENS 94, Vienna, Austria, September 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410015},
primaryClass={cmp-lg cs.CL}
} | srinivas1994lexicalization |
arxiv-668450 | cmp-lg/9410016 | Dutch Cross Serial Dependencies in HPSG | <|reference_start|>Dutch Cross Serial Dependencies in HPSG: We present an analysis of Dutch cross serial dependencies in Head-driven Phrase Structure Grammar. Arguably, our analysis differs from other analyses in that we do not refer to `additional' mechanisms (e.g., sequence union, head wrapping): just standard structure sharing, an immediate dominance schema and a linear precedence rule.<|reference_end|> | arxiv | @article{rentier1994dutch,
title={Dutch Cross Serial Dependencies in HPSG},
author={Gerrit Rentier (ITK, Tilburg University)},
journal={arXiv preprint arXiv:cmp-lg/9410016},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410016},
primaryClass={cmp-lg cs.CL}
} | rentier1994dutch |
arxiv-668451 | cmp-lg/9410017 | Concurrent Lexicalized Dependency Parsing: The ParseTalk Model | <|reference_start|>Concurrent Lexicalized Dependency Parsing: The ParseTalk Model: A grammar model for concurrent, object-oriented natural language parsing is introduced. Complete lexical distribution of grammatical knowledge is achieved building upon the head-oriented notions of valency and dependency, while inheritance mechanisms are used to capture lexical generalizations. The underlying concurrent computation model relies upon the actor paradigm. We consider message passing protocols for establishing dependency relations and ambiguity handling.<|reference_end|> | arxiv | @article{broeker1994concurrent,
title={Concurrent Lexicalized Dependency Parsing: The ParseTalk Model},
author={Norbert Broeker, Udo Hahn, Susanne Schacht (Computational
Linguistics Research Group, Freiburg University, Germany)},
journal={Proc.15th Intl Conference on Computational Linguistics, Kyoto,
Japan, August 1994, pp.379-385},
year={1994},
number={CLIF Report 9/94},
archivePrefix={arXiv},
eprint={cmp-lg/9410017},
primaryClass={cmp-lg cs.CL}
} | broeker1994concurrent |
arxiv-668452 | cmp-lg/9410018 | Part-of-Speech Tagging with Neural Networks | <|reference_start|>Part-of-Speech Tagging with Neural Networks: Text corpora which are tagged with part-of-speech information are useful in many areas of linguistic research. In this paper, a new part-of-speech tagging method based on neural networks (Net- Tagger) is presented and its performance is compared to that of a HMM-tagger and a trigram-based tagger. It is shown that the Net- Tagger performs as well as the trigram-based tagger and better than the HMM-tagger.<|reference_end|> | arxiv | @article{schmid1994part-of-speech,
title={Part-of-Speech Tagging with Neural Networks},
author={Helmut Schmid (University of Stuttgart)},
journal={Coling-94, 172-176},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410018},
primaryClass={cmp-lg cs.CL}
} | schmid1994part-of-speech |
arxiv-668453 | cmp-lg/9410019 | Concurrent Lexicalized Dependency Parsing: A Behavioral View on ParseTalk Events | <|reference_start|>Concurrent Lexicalized Dependency Parsing: A Behavioral View on ParseTalk Events: The behavioral specification of an object-oriented grammar model is considered. The model is based on full lexicalization, head-orientation via valency constraints and dependency relations, inheritance as a means for non-redundant lexicon specification, and concurrency of computation. The computation model relies upon the actor paradigm, with concurrency entering through asynchronous message passing between actors. In particular, we here elaborate on principles of how the global behavior of a lexically distributed grammar and its corresponding parser can be specified in terms of event type networks and event networks, resp.<|reference_end|> | arxiv | @article{schacht1994concurrent,
title={Concurrent Lexicalized Dependency Parsing: A Behavioral View on
ParseTalk Events},
author={Susanne Schacht, Udo Hahn, Norbert Broeker (Computational Linguistics
Research Group, Freiburg University, Germany)},
journal={Proc.15th Intl Conference on Computational Linguistics, Kyoto,
Japan, August 1994, pp.489-493},
year={1994},
number={CLIF Report 9/94},
archivePrefix={arXiv},
eprint={cmp-lg/9410019},
primaryClass={cmp-lg cs.CL}
} | schacht1994concurrent |
arxiv-668454 | cmp-lg/9410020 | Construction of a Bilingual Dictionary Intermediated by a Third Language | <|reference_start|>Construction of a Bilingual Dictionary Intermediated by a Third Language: When using a third language to construct a bilingual dictionary, it is necessary to discriminate equivalencies from inappropriate words derived as a result of ambiguity in the third language. We propose a method to treat this by utilizing the structures of dictionaries to measure the nearness of the meanings of words. The resulting dictionary is a word-to-word bilingual dictionary of nouns and can be used to refine the entries and equivalencies in published bilingual dictionaries.<|reference_end|> | arxiv | @article{tanaka1994construction,
title={Construction of a Bilingual Dictionary Intermediated by a Third Language},
author={Kumiko TANAKA (Univ. Tokyo), Kyoji UMEMURA (NTT Basic Research
Laboratories)},
journal={arXiv preprint arXiv:cmp-lg/9410020},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410020},
primaryClass={cmp-lg cs.CL}
} | tanaka1994construction |
arxiv-668455 | cmp-lg/9410021 | Reference Resolution Using Semantic Patterns in Japanese Newspaper Articles | <|reference_start|>Reference Resolution Using Semantic Patterns in Japanese Newspaper Articles: Reference resolution is one of the important tasks in natural language processing. In this paper, the author first determines the referents and their locations of "dousha", literally meaning "the same company", which appear in Japanese newspaper articles. Secondly, three heuristic methods, two of which use semantic information in text such as company names and their patterns, are proposed and tested on how accurately they identify the correct referents. The proposed methods based on semantic patterns show high accuracy for reference resolution of "dousha" (more than 90\%). This suggests that semantic pattern-matching methods are effective for reference resolution in newspaper articles.<|reference_end|> | arxiv | @article{wakao1994reference,
title={Reference Resolution Using Semantic Patterns in Japanese Newspaper
Articles},
author={Takahiro Wakao},
journal={arXiv preprint arXiv:cmp-lg/9410021},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410021},
primaryClass={cmp-lg cs.CL}
} | wakao1994reference |
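The abstract leaves the three heuristics unspecified, but the simplest semantically informed baseline is recency: resolve "dousha" to the company name mentioned most recently before it. The sketch below illustrates only that baseline; the function and data layout are hypothetical, and Wakao's pattern-based heuristics are richer than this.
```python
# Hypothetical recency baseline for resolving "dousha" ("the same company").
def resolve_dousha(company_mentions, anaphor_offset):
    """company_mentions: (char_offset, company_name) pairs found earlier
    in the article, e.g. by a company-name recognizer."""
    prior = [m for m in company_mentions if m[0] < anaphor_offset]
    if not prior:
        return None                               # no antecedent in text
    return max(prior, key=lambda m: m[0])[1]      # most recent mention
```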
arxiv-668456 | cmp-lg/9410022 | Automated tone transcription | <|reference_start|>Automated tone transcription: In this paper I report on an investigation into the problem of assigning tones to pitch contours. The proposed model is intended to serve as a tool for phonologists working on instrumentally obtained pitch data from tone languages. Motivation and exemplification for the model is provided by data taken from my fieldwork on Bamileke Dschang (Cameroon). Following recent work by Liberman and others, I provide a parametrised F_0 prediction function P which generates F_0 values from a tone sequence, and I explore the asymptotic behaviour of downstep. Next, I observe that transcribing a sequence X of pitch (i.e. F_0) values amounts to finding a tone sequence T such that P(T) {}~= X. This is a combinatorial optimisation problem, for which two non-deterministic search techniques are provided: a genetic algorithm and a simulated annealing algorithm. Finally, two implementations---one for each technique---are described and then compared using both artificial and real data for sequences of up to 20 tones. These programs can be adapted to other tone languages by adjusting the F_0 prediction function.<|reference_end|> | arxiv | @article{bird1994automated,
title={Automated tone transcription},
author={Steven Bird (University of Edinburgh)},
journal={Proceedings of the First Meeting of the ACL Special Interest
Group in Computational Phonology},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410022},
primaryClass={cmp-lg cs.CL}
} | bird1994automated |
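The combinatorial search in the abstract — find a tone sequence T with P(T) ~= X — can be made concrete with a toy prediction function. In the sketch below, the two-level H/L model with a single downstep factor is a deliberately crude stand-in for the paper's parametrised F_0 function, and every constant is an illustrative assumption.
```python
# Toy simulated annealing for tone transcription: fit an H/L sequence to
# observed F_0 values. predict_f0 is a crude stand-in for the paper's P.
import math, random

def predict_f0(tones, high=240.0, low=180.0, downstep=0.9):
    f0, register, prev = [], 1.0, None
    for t in tones:
        if prev == 'L' and t == 'H':
            register *= downstep          # downstep lowers the register
        f0.append(register * (high if t == 'H' else low))
        prev = t
    return f0

def cost(tones, target):                  # squared error between P(T) and X
    return sum((p - x) ** 2 for p, x in zip(predict_f0(tones), target))

def anneal(target, steps=20000, temp=500.0, cooling=0.9995):
    tones = [random.choice('HL') for _ in target]
    c = cost(tones, target)
    best, best_c = tones[:], c
    for _ in range(steps):
        cand = tones[:]
        i = random.randrange(len(cand))
        cand[i] = 'H' if cand[i] == 'L' else 'L'   # flip one tone
        cc = cost(cand, target)
        if cc < c or random.random() < math.exp((c - cc) / temp):
            tones, c = cand, cc
            if c < best_c:
                best, best_c = tones[:], c
        temp *= cooling
    return best
```
The genetic-algorithm variant the paper also implements differs only in how candidate tone sequences are proposed (crossover and mutation over a population); the cost of mismatch between P(T) and X is the same.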
arxiv-668457 | cmp-lg/9410023 | Korean to English Translation Using Synchronous TAGs | <|reference_start|>Korean to English Translation Using Synchronous TAGs: It is often argued that accurate machine translation requires reference to contextual knowledge for the correct treatment of linguistic phenomena such as dropped arguments and accurate lexical selection. One of the historical arguments in favor of the interlingua approach has been that, since it revolves around a deep semantic representation, it is better able to handle the types of linguistic phenomena that are seen as requiring a knowledge-based approach. In this paper we present an alternative approach, exemplified by a prototype system for machine translation of English and Korean which is implemented in Synchronous TAGs. This approach is essentially transfer based, and uses semantic feature unification for accurate lexical selection of polysemous verbs. The same semantic features, when combined with a discourse model which stores previously mentioned entities, can also be used for the recovery of topicalized arguments. In this paper we concentrate on the translation of Korean to English.<|reference_end|> | arxiv | @article{egedi1994korean,
title={Korean to English Translation Using Synchronous TAGs},
author={Dania Egedi, Martha Palmer, Hyun S. Park, Aravind K. Joshi (University
of Pennsylvania)},
journal={Proceedings of AMTA 94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410023},
primaryClass={cmp-lg cs.CL}
} | egedi1994korean |
arxiv-668458 | cmp-lg/9410024 | A Freely Available Wide Coverage Morphological Analyzer for English | <|reference_start|>A Freely Available Wide Coverage Morphological Analyzer for English: This paper presents a morphological lexicon for English that handles more than 317000 inflected forms derived from over 90000 stems. The lexicon is available in two formats. The first can be used by an implementation of a two-level processor for morphological analysis. The second, derived from the first one for efficiency reasons, consists of a disk-based database using a UNIX hash table facility. We also built an X Window tool to facilitate the maintenance and browsing of the lexicon. The package is ready to be integrated into an natural language application such as a parser through hooks written in Lisp and C.<|reference_end|> | arxiv | @article{karp1994a,
title={A Freely Available Wide Coverage Morphological Analyzer for English},
author={Daniel Karp (Stanford U.), Yves Schabes (U. Penn), Martin Zaidel (U.
Penn), and Dania Egedi (U. Penn)},
journal={Proceedings of Coling 92},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410024},
primaryClass={cmp-lg cs.CL}
} | karp1994a |
arxiv-668459 | cmp-lg/9410025 | Syntactic Analysis Of Natural Language Using Linguistic Rules And Corpus-based Patterns | <|reference_start|>Syntactic Analysis Of Natural Language Using Linguistic Rules And Corpus-based Patterns: We are concerned with the syntactic annotation of unrestricted text. We combine a rule-based analysis with subsequent exploitation of empirical data. The rule-based surface syntactic analyser leaves some amount of ambiguity in the output that is resolved using empirical patterns. We have implemented a system for generating and applying corpus-based patterns. Some patterns describe the main constituents in the sentence and some the local context of the each syntactic function. There are several (partly) reduntant patterns, and the ``pattern'' parser selects analysis of the sentence that matches the strictest possible pattern(s). The system is applied to an experimental corpus. We present the results and discuss possible refinements of the method from a linguistic point of view.<|reference_end|> | arxiv | @article{tapanainen1994syntactic,
title={Syntactic Analysis Of Natural Language Using Linguistic Rules And
Corpus-based Patterns},
author={Pasi Tapanainen (Rank Xerox Research Centre), Timo J{\"a}rvinen
(University of Helsinki)},
journal={arXiv preprint arXiv:cmp-lg/9410025},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410025},
primaryClass={cmp-lg cs.CL}
} | tapanainen1994syntactic |
arxiv-668460 | cmp-lg/9410026 | A Rule-Based Approach To Prepositional Phrase Attachment Disambiguation | <|reference_start|>A Rule-Based Approach To Prepositional Phrase Attachment Disambiguation: In this paper, we describe a new corpus-based approach to prepositional phrase attachment disambiguation, and present results comparing performance of this algorithm with other corpus-based approaches to this problem.<|reference_end|> | arxiv | @article{brill1994a,
title={A Rule-Based Approach To Prepositional Phrase Attachment Disambiguation},
author={Eric Brill (Johns Hopkins) and Philip Resnik (Sun Microsystems)},
journal={COLING 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410026},
primaryClass={cmp-lg cs.CL}
} | brill1994a |
arxiv-668461 | cmp-lg/9410027 | Probabilistic Tagging with Feature Structures | <|reference_start|>Probabilistic Tagging with Feature Structures: The described tagger is based on a hidden Markov model and uses tags composed of features such as part-of-speech, gender, etc. The contextual probability of a tag (state transition probability) is deduced from the contextual probabilities of its feature-value-pairs. This approach is advantageous when the available training corpus is small and the tag set large, which can be the case with morphologically rich languages.<|reference_end|> | arxiv | @article{kempe1994probabilistic,
title={Probabilistic Tagging with Feature Structures},
author={Andre Kempe (University of Stuttgart)},
journal={COLING-94, vol.1, pp.161-165, Kyoto, Japan. August 5-9, 1994.},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410027},
primaryClass={cmp-lg cs.CL}
} | kempe1994probabilistic |
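The key move — deducing a composite tag's transition probability from its feature-value pairs — can be sketched concretely. The naive-Bayes-style multiplicative combination and the add-one smoothing below are illustrative assumptions; the abstract does not give Kempe's exact deduction formula.
```python
# Sketch: score a tag-to-tag transition as a product of transition
# probabilities between individual feature-value pairs. Tags are dicts,
# e.g. {'pos': 'NOUN', 'gender': 'fem', 'number': 'sg'}.
from collections import Counter

def train(tagged_corpus):
    big, uni = Counter(), Counter()
    for sent in tagged_corpus:                 # sent: list of tag dicts
        for prev, cur in zip(sent, sent[1:]):
            for fv1 in prev.items():
                uni[fv1] += 1
                for fv2 in cur.items():
                    big[fv1, fv2] += 1
    return big, uni

def transition_score(prev_tag, cur_tag, big, uni):
    """Unnormalized score; add-one smoothing keeps unseen pairs nonzero."""
    score = 1.0
    for fv1 in prev_tag.items():
        for fv2 in cur_tag.items():
            score *= (big[fv1, fv2] + 1) / (uni[fv1] + len(uni))
    return score
```
Because feature-pair statistics are far denser than full-tag statistics, estimates of this kind stay usable when the training corpus is small and the tag set large — precisely the situation the paper targets.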
arxiv-668462 | cmp-lg/9410028 | Minimal Change and Bounded Incremental Parsing | <|reference_start|>Minimal Change and Bounded Incremental Parsing: Ideally, the time that an incremental algorithm uses to process a change should be a function of the size of the change rather than, say, the size of the entire current input. Based on a formalization of ``the set of things changed'' by an incremental modification, this paper investigates how and to what extent it is possible to give such a guarantee for a chart-based parsing framework and discusses the general utility of a minimality notion in incremental processing.<|reference_end|> | arxiv | @article{wirén1994minimal,
title={Minimal Change and Bounded Incremental Parsing},
author={Mats Wir{\'e}n (Universit{\"a}t des Saarlandes)},
journal={arXiv preprint arXiv:cmp-lg/9410028},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410028},
primaryClass={cmp-lg cs.CL}
} | wirén1994minimal |
arxiv-668463 | cmp-lg/9410029 | Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing | <|reference_start|>Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing: In a lexicalized grammar formalism such as Lexicalized Tree-Adjoining Grammar (LTAG), each lexical item is associated with at least one elementary structure (supertag) that localizes syntactic and semantic dependencies. Thus a parser for a lexicalized grammar must search a large set of supertags to choose the right ones to combine for the parse of the sentence. We present techniques for disambiguating supertags using local information such as lexical preference and local lexical dependencies. The similarity between LTAG and Dependency grammars is exploited in the dependency model of supertag disambiguation. The performance results for various models of supertag disambiguation such as unigram, trigram and dependency-based models are presented.<|reference_end|> | arxiv | @article{joshi1994disambiguation,
title={Disambiguation of Super Parts of Speech (or Supertags): Almost Parsing},
author={Aravind K. Joshi and B. Srinivas (University of Pennsylvania)},
journal={Proceedings of the 15th International Conference on Computational
Linguistics (COLING 94), Kyoto, Japan, August 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410029},
primaryClass={cmp-lg cs.CL}
} | joshi1994disambiguation |
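Of the disambiguation models compared, the unigram baseline is simple enough to state in a few lines: each word keeps only its most frequent supertag from an annotated corpus, so the parser merely has to combine the chosen elementary trees ("almost parsing"). A minimal sketch, assuming a supertag-annotated training corpus; the fallback tree name is made up.
```python
# Unigram supertagging baseline: pick each word's most frequent supertag.
from collections import Counter, defaultdict

def train_unigram(annotated):          # annotated: [(word, supertag), ...]
    counts = defaultdict(Counter)
    for word, st in annotated:
        counts[word][st] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def supertag(sentence, model, fallback='alphaNXN'):  # fallback is made up
    return [model.get(w, fallback) for w in sentence]
```
The trigram and dependency models replace this per-word argmax with a search over supertag sequences, which is where the reported accuracy gains come from.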
arxiv-668464 | cmp-lg/9410030 | Feature-Based TAG in place of multi-component adjunction: Computational Implications | <|reference_start|>Feature-Based TAG in place of multi-component adjunction: Computational Implications: Using feature-based Tree Adjoining Grammar (TAG), this paper presents linguistically motivated analyses of constructions claimed to require multi-component adjunction. These feature-based TAG analyses permit parsing of these constructions using an existing unification-based Earley-style TAG parser, thus obviating the need for a multi-component TAG parser without sacrificing linguistic coverage for English.<|reference_end|> | arxiv | @article{hockey1994feature-based,
title={Feature-Based TAG in place of multi-component adjunction: Computational
Implications},
author={B.A. Hockey and B. Srinivas (University of Pennsylvania)},
journal={Natural Language Processing Pacific Rim Symposium (NLPRS 93)},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410030},
primaryClass={cmp-lg cs.CL}
} | hockey1994feature-based |
arxiv-668465 | cmp-lg/9410031 | Towards a More User-friendly Correction | <|reference_start|>Towards a More User-friendly Correction: We first present our view of detection and correction of syntactic errors. We then introduce a new correction method, based on heuristic criteria used to decide which correction should be preferred. Weighting of these criteria leads to a flexible and parameterizable system, which can adapt itself to the user. A partitioning of the trees based on linguistic criteria (agreement rules) rather than computational criteria is then necessary. We end by proposing extensions to lexical correction and to some syntactic errors. Our aim is an adaptable and user-friendly system capable of automatic correction for some applications.<|reference_end|> | arxiv | @article{genthial1994towards,
title={Towards a More User-friendly Correction},
author={Damien Genthial, Jacques Courtin, Jacques Menezo (Equipe Trilan,
LGI-Imag Campus, BP 53, Grenoble)},
journal={arXiv preprint arXiv:cmp-lg/9410031},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410031},
primaryClass={cmp-lg cs.CL}
} | genthial1994towards |
arxiv-668466 | cmp-lg/9410032 | Planning Argumentative Texts | <|reference_start|>Planning Argumentative Texts: This paper presents PROVERB, a text planner for argumentative texts. PROVERB's main feature is that it combines global hierarchical planning and unplanned organization of text with respect to local derivation relations in a complementary way. The former splits the task of presenting a particular proof into subtasks of presenting subproofs. The latter simulates how the next intermediate conclusion to be presented is chosen under the guidance of the local focus.<|reference_end|> | arxiv | @article{huang1994planning,
title={Planning Argumentative Texts},
author={Xiaorong Huang (Fachbereich Informatik, Universit{\"a}t des
Saarlandes, Saarbr{\"u}cken, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9410032},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9410032},
primaryClass={cmp-lg cs.CL}
} | huang1994planning |
arxiv-668467 | cmp-lg/9410033 | Default Handling in Incremental Generation | <|reference_start|>Default Handling in Incremental Generation: Natural language generation must work with insufficient input. Underspecifications can be caused by shortcomings of the component providing the input or by the preliminary state of incrementally given input. The paper aims to escape from such dead-end situations by making assumptions. We discuss global aspects of default handling. Two problem classes for defaults in the incremental syntactic generator VM-GEN are presented to substantiate our discussion.<|reference_end|> | arxiv | @article{harbusch1994default,
title={Default Handling in Incremental Generation},
author={Karin Harbusch (DFKI), Gen-ichiro Kikui (ATR), Anne Kilger (DFKI)},
journal={arXiv preprint arXiv:cmp-lg/9410033},
year={1994},
number={VM-Report 22},
archivePrefix={arXiv},
eprint={cmp-lg/9410033},
primaryClass={cmp-lg cs.CL}
} | harbusch1994default |
arxiv-668468 | cmp-lg/9410034 | A Comparison of Two Smoothing Methods for Word Bigram Models | <|reference_start|>A Comparison of Two Smoothing Methods for Word Bigram Models: Word bigram models estimated from text corpora require smoothing methods to estimate the probabilities of unseen bigrams. The deleted estimation method uses the formula: Pr(i|j) = lambda f_i + (1-lambda) f_{i|j}, where f_i and f_{i|j} are the relative frequency of i and the conditional relative frequency of i given j, respectively, and lambda is an optimized parameter. MacKay (1994) proposes a Bayesian approach using Dirichlet priors, which yields a different formula: Pr(i|j) = (alpha/(F_j + alpha)) m_i + (1 - alpha/(F_j + alpha)) f_{i|j}, where F_j is the count of j and alpha and m_i are optimized parameters. This thesis describes an experiment in which the two methods were trained on a two-million-word corpus taken from the Canadian _Hansard_ and compared on the basis of the experimental perplexity that they assigned to a shared test corpus. The methods proved to be about equally accurate, with MacKay's method using fewer resources.<|reference_end|> | arxiv | @article{peto1994a,
title={A Comparison of Two Smoothing Methods for Word Bigram Models},
author={Linda Bauman Peto (University of Toronto)},
journal={arXiv preprint arXiv:cmp-lg/9410034},
year={1994},
number={CSRI-304},
archivePrefix={arXiv},
eprint={cmp-lg/9410034},
primaryClass={cmp-lg cs.CL}
} | peto1994a |
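Since the two estimators are given explicitly in the abstract, they translate directly into code. The variable names are mine; the formulas are the abstract's.
```python
# The two smoothed bigram estimators compared in the thesis.
def deleted_estimation(f_i, f_i_given_j, lam):
    """Pr(i|j) = lambda*f_i + (1 - lambda)*f_{i|j}."""
    return lam * f_i + (1.0 - lam) * f_i_given_j

def dirichlet_smoothing(m_i, f_i_given_j, F_j, alpha):
    """MacKay: Pr(i|j) = (alpha/(F_j+alpha))*m_i
                         + (1 - alpha/(F_j+alpha))*f_{i|j}."""
    w = alpha / (F_j + alpha)   # prior weight shrinks as count F_j grows
    return w * m_i + (1.0 - w) * f_i_given_j
```
The structural difference is that the Dirichlet weight adapts per context: with alpha = 50, a context word seen 100 times takes only a third of its mass from the prior mean m_i, while one seen 10 times takes five-sixths.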
arxiv-668469 | cmp-lg/9411001 | Sublanguage Terms: Dictionaries, Usage, and Automatic Classification | <|reference_start|>Sublanguage Terms: Dictionaries, Usage, and Automatic Classification: The use of terms from natural and social scientific titles and abstracts is studied from the perspective of sublanguages and their specialized dictionaries. Different notions of sublanguage distinctiveness are explored. Objective methods for separating hard and soft sciences are suggested based on measures of sublanguage use, dictionary characteristics, and sublanguage distinctiveness. Abstracts were automatically classified with a high degree of accuracy by using a formula that considers the degree of uniqueness of terms in each sublanguage. This may prove useful for text filtering or information retrieval systems.<|reference_end|> | arxiv | @article{losee1994sublanguage,
title={Sublanguage Terms: Dictionaries, Usage, and Automatic Classification},
author={Robert M. Losee and Stephanie W. Haas (School of Information and
Library Science, U. of North Carolina, Chapel Hill)},
journal={arXiv preprint arXiv:cmp-lg/9411001},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411001},
primaryClass={cmp-lg cs.CL}
} | losee1994sublanguage |
arxiv-668470 | cmp-lg/9411002 | CLARE: A Contextual Reasoning and Cooperative Response Framework for the Core Language Engine | <|reference_start|>CLARE: A Contextual Reasoning and Cooperative Response Framework for the Core Language Engine: This report describes the research, design and implementation work carried out in building the CLARE system at SRI International, Cambridge, England. CLARE was designed as a natural language processing system with facilities for reasoning and understanding in context and for generating cooperative responses. The project involved both further development of SRI's Core Language Engine (Alshawi, 1992, MIT Press) natural language processor and the design and implementation of new components for reasoning and response generation. The CLARE system has advanced the state of the art in a wide variety of areas, both through the use of novel techniques developed on the project, and by extending the coverage or scale of known techniques. The language components are application-independent and provide interfaces for the development of new types of application.<|reference_end|> | arxiv | @article{alshawi1994clare:,
title={CLARE: A Contextual Reasoning and Cooperative Response Framework for the
Core Language Engine},
author={Hiyan Alshawi, David Carter, Richard Crouch, Steve Pulman, Manny
Rayner and Arnold Smith},
journal={arXiv preprint arXiv:cmp-lg/9411002},
year={1994},
number={CRC-028},
archivePrefix={arXiv},
eprint={cmp-lg/9411002},
primaryClass={cmp-lg cs.CL}
} | alshawi1994clare: |
arxiv-668471 | cmp-lg/9411003 | Adnominal adjectives, code-switching and lexicalized TAG | <|reference_start|>Adnominal adjectives, code-switching and lexicalized TAG: In codeswitching contexts, the language of a syntactic head determines the distribution of its complements. Mahootian 1993 derives this generalization by representing heads as the anchors of elementary trees in a lexicalized TAG. However, not all codeswitching sequences are amenable to a head-complement analysis. For instance, adnominal adjectives can occupy positions not available to them in their own language, and the TAG derivation of such sequences must use unanchored auxiliary trees. palabras heavy-duty `heavy-duty words' (Spanish-English; Poplack 1980:584) taste lousy sana `very lousy taste' (English-Swahili; Myers-Scotton 1993:29, (10)) Given the null hypothesis that codeswitching and monolingual sequences are derived in an identical manner, sequences like those above provide evidence that pure lexicalized TAGs are inadequate for the description of natural language.<|reference_end|> | arxiv | @article{mahootian1994adnominal,
title={Adnominal adjectives, code-switching and lexicalized TAG},
author={Shahrzad Mahootian and Beatrice Santorini},
journal={arXiv preprint arXiv:cmp-lg/9411003},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411003},
primaryClass={cmp-lg cs.CL}
} | mahootian1994adnominal |
arxiv-668472 | cmp-lg/9411004 | Determining Determiner Sequencing: A Syntactic Analysis for English | <|reference_start|>Determining Determiner Sequencing: A Syntactic Analysis for English: Previous work on English determiners has primarily concentrated on their semantics or scoping properties rather than their complex ordering behavior. The little work that has been done on determiner ordering generally splits determiners into three subcategories. However, this small number of categories does not capture the finer distinctions necessary to correctly order determiners. This paper presents a syntactic account of determiner sequencing based on eight independently identified semantic features. Complex determiners, such as genitives, partitives, and determiner modifying adverbials, are also presented. This work has been implemented as part of XTAG, a wide-coverage grammar for English based in the Feature-Based, Lexicalized Tree Adjoining Grammar (FB-LTAG) formalism.<|reference_end|> | arxiv | @article{hockey1994determining,
title={Determining Determiner Sequencing: A Syntactic Analysis for English},
author={Beth Ann Hockey, Dania Egedi (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9411004},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411004},
primaryClass={cmp-lg cs.CL}
} | hockey1994determining |
arxiv-668473 | cmp-lg/9411005 | Constraining Lexical Selection Across Languages Using TAGs | <|reference_start|>Constraining Lexical Selection Across Languages Using TAGs: Lexical selection in Machine Translation consists of several related components. Two that have received a lot of attention are lexical mapping from an underlying concept or lexical item, and choosing the correct subcategorization frame based on argument structure. Because most MT applications are small or relatively domain specific, a third component of lexical selection is generally overlooked - distinguishing between lexical items that are closely related conceptually. While some MT systems have proposed using a 'world knowledge' module to decide which word is more appropriate based on various pragmatic or stylistic constraints, we are interested in seeing how much we can accomplish using a combination of syntax and lexical semantics. By using separate ontologies for each language implemented in FB-LTAGs, we are able to elegantly model the more specific and language dependent syntactic and semantic distinctions necessary to further filter the choice of the lexical item.<|reference_end|> | arxiv | @article{egedi1994constraining,
title={Constraining Lexical Selection Across Languages Using TAGs},
author={Dania Egedi, Martha Palmer (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9411005},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411005},
primaryClass={cmp-lg cs.CL}
} | egedi1994constraining |
arxiv-668474 | cmp-lg/9411006 | Status of the XTAG System | <|reference_start|>Status of the XTAG System: XTAG is an ongoing project to develop a wide-coverage grammar for English, based on the Feature-based Lexicalized Tree Adjoining Grammar (FB-LTAG) formalism. The XTAG system integrates a morphological analyzer, an N-best part-of-speech tagger, an Earley-style parser and an X-window interface, along with a wide-coverage grammar for English developed using the system. This system serves as a linguist's workbench for developing FB-LTAG specifications. This paper presents a description of and recent improvements to the various components of the XTAG system. It also presents the recent performance of the wide-coverage grammar on various corpora and compares it against the performance of other wide-coverage and domain-specific grammars.<|reference_end|> | arxiv | @article{doran1994status,
title={Status of the XTAG System},
author={Christy Doran, Dania Egedi, Beth Ann Hockey, B. Srinivas (University
of Pennsylvania)},
journal={Proceedings of TAG+3, 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411006},
primaryClass={cmp-lg cs.CL}
} | doran1994status |
arxiv-668475 | cmp-lg/9411007 | The Linguistic Relevance of Quasi-Trees | <|reference_start|>The Linguistic Relevance of Quasi-Trees: We discuss two constructions (long scrambling and ECM verbs) which challenge most syntactic theories (including traditional TAG approaches) since they seem to require exceptional mechanisms and postulates. We argue that these constructions should in fact be analyzed in a similar manner, namely as involving a verb which selects for a ``defective'' complement. These complements are defective in that they lack certain Case-assigning abilities (represented as functional heads). The constructions differ in how many such abilities are lacking. Following the previous analysis of scrambling of Rambow (1994), we propose a TAG analysis based on quasi-trees.<|reference_end|> | arxiv | @article{kroch1994the,
title={The Linguistic Relevance of Quasi-Trees},
author={Anthony Kroch (U. Penn.) and Owen Rambow (Paris 7)},
journal={In {\em 3e Colloque International sur les Grammaires d'Arbres
Adjoints (TAG+3)}},
year={1994},
number={Report TALANA-RT-94-01, TALANA, Universit{\'e} Paris 7, 1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411007},
primaryClass={cmp-lg cs.CL}
} | kroch1994the |
arxiv-668476 | cmp-lg/9411008 | Parsing Free Word-Order Languages in Polynomial Time | <|reference_start|>Parsing Free Word-Order Languages in Polynomial Time: We present a parsing algorithm with polynomial time complexity for a large subset of V-TAG languages. V-TAG, a variant of multi-component TAG, can handle free-word order phenomena which are beyond the class LCFRS (which includes regular TAG). Our algorithm is based on a CYK-style parser for TAGs.<|reference_end|> | arxiv | @article{becker1994parsing,
title={Parsing Free Word-Order Languages in Polynomial Time},
author={Tilman Becker (U. Penn.) and Owen Rambow (Paris 7)},
journal={In {\em 3e Colloque International sur les Grammaires d'Arbres
Adjoints (TAG+3)}},
year={1994},
number={TALANA-RT-94-01, TALANA, Universit{\'e} Paris 7, 1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411008},
primaryClass={cmp-lg cs.CL}
} | becker1994parsing |
arxiv-668477 | cmp-lg/9411009 | Bootstrapping A Wide-Coverage CCG from FB-LTAG | <|reference_start|>Bootstrapping A Wide-Coverage CCG from FB-LTAG: A number of researchers have noted the similarities between LTAGs and CCGs. Observing this resemblance, we felt that we could make use of the wide-coverage grammar developed in the XTAG project to build a wide-coverage CCG. To our knowledge there have been no attempts to construct a large-scale CCG parser with the lexicon to support it. In this paper, we describe such a system, built by adapting various XTAG components to CCG. We find that, despite the similarities between the formalisms, certain parts of the grammatical workload are distributed differently. In addition, the flexibility of CCG derivations allows the translated grammar to handle a number of ``non-constituent'' constructions which the XTAG grammar cannot.<|reference_end|> | arxiv | @article{doran1994bootstrapping,
title={Bootstrapping A Wide-Coverage CCG from FB-LTAG},
author={Christine Doran and B. Srinivas (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9411009},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411009},
primaryClass={cmp-lg cs.CL}
} | doran1994bootstrapping |
arxiv-668478 | cmp-lg/9411010 | The "Whiteboard" Architecture: a way to integrate heterogeneous components of NLP systems | <|reference_start|>The "Whiteboard" Architecture: a way to integrate heterogeneous components of NLP systems: We present a new software architecture for NLP systems made of heterogeneous components, and demonstrate an architectural prototype we have built at ATR in the context of Speech Translation.<|reference_end|> | arxiv | @article{boitet1994the,
title={The "Whiteboard" Architecture: a way to integrate heterogeneous
components of NLP systems},
author={Christian Boitet (GETA, IMAG (UJF & CNRS)) and Mark Seligman (ATR
Interpreting Telecommunications Research Labs)},
journal={COLING-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411010},
primaryClass={cmp-lg cs.CL}
} | boitet1994the |
arxiv-668479 | cmp-lg/9411011 | Acquiring Knowledge from Encyclopedic Texts | <|reference_start|>Acquiring Knowledge from Encyclopedic Texts: A computational model for the acquisition of knowledge from encyclopedic texts is described. The model has been implemented in a program, called SNOWY, that reads unedited texts from {\em The World Book Encyclopedia}, and acquires new concepts and conceptual relations about topics dealing with the dietary habits of animals, their classifications and habitats. The program is also able to answer an ample set of questions about the knowledge that it has acquired. This paper describes the essential components of this model, namely semantic interpretation, inferences and representation, and ends with an evaluation of the performance of the program, a sample of the questions that it is able to answer, and its relation to other programs of similar nature.<|reference_end|> | arxiv | @article{gomez1994acquiring,
title={Acquiring Knowledge from Encyclopedic Texts},
author={Fernando Gomez (UCF), Richard Hull (UCF), Carlos Segami (Barry
University)},
journal={Proceedings of the Fourth ACL Conference on Applied Natural
Language Processing, Stuttgart, Germany, October 13-15, 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411011},
primaryClass={cmp-lg cs.CL}
} | gomez1994acquiring |
arxiv-668480 | cmp-lg/9411012 | From Regular to Context Free to Mildly Context Sensitive Tree Rewriting Systems: The Path of Child Language Acquisition | <|reference_start|>From Regular to Context Free to Mildly Context Sensitive Tree Rewriting Systems: The Path of Child Language Acquisition: Current syntactic theory limits the range of grammatical variation so severely that the logical problem of grammar learning is trivial. Yet, children exhibit characteristic stages in syntactic development at least through their sixth year. Rather than positing maturational delays, I suggest that acquisition difficulties are the result of limitations in manipulating grammatical representations. I argue that the genesis of complex sentences reflects increasing generative capacity in the systems generating structural descriptions: conjoined clauses demand only a regular tree rewriting system; sentential embedding uses a context-free tree substitution grammar; modification requires TAG, a mildly context-sensitive system.<|reference_end|> | arxiv | @article{frank1994from,
title={From Regular to Context Free to Mildly Context Sensitive Tree Rewriting
Systems: The Path of Child Language Acquisition},
author={Robert Frank (University of Delaware)},
journal={Appeared in {\em 3e Colloque International sur les grammaires
d'Arbres Adjoints (TAG+3).}},
year={1994},
number={TALANA-RT-94-01, TALANA, Universit\'{e} Paris 7, 1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411012},
primaryClass={cmp-lg cs.CL}
} | frank1994from |
arxiv-668481 | cmp-lg/9411013 | Phoneme-level speech and natural language integration for agglutinative languages | <|reference_start|>Phoneme-level speech and natural language integration for agglutinative languages: A new tightly coupled speech and natural language integration model is presented for a TDNN-based large vocabulary continuous speech recognition system. Unlike the popular n-best techniques developed for integrating mainly HMM-based speech and natural language systems at the word level, which is obviously inadequate for morphologically complex agglutinative languages, our model constructs a spoken language system based on phoneme-level integration. The TDNN-CYK spoken language architecture is designed and implemented using the TDNN-based diphone recognition module integrated with the table-driven phonological/morphological co-analysis. Our integration model provides a seamless integration of speech and natural language for connectionist speech recognition systems, especially for morphologically complex languages such as Korean. Our experimental results show that speaker-dependent continuous Eojeol (word) recognition can be integrated with the morphological analysis with over 80\% morphological analysis success rate directly from the speech input for middle-level vocabularies.<|reference_end|> | arxiv | @article{kim1994phoneme-level,
title={Phoneme-level speech and natural language integration for
agglutinative languages},
author={Geunbae Lee, Jong-Hyeok Lee, Kyunghee Kim (Department of Computer
Science & Engineering and Postech Information Research Laboratory, Pohang
University of Science & Technology, Korea)},
journal={arXiv preprint arXiv:cmp-lg/9411013},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411013},
primaryClass={cmp-lg cs.CL}
} | kim1994phoneme-level |
arxiv-668482 | cmp-lg/9411014 | Automatically Identifying Morphological Relations in Machine-Readable Dictionaries | <|reference_start|>Automatically Identifying Morphological Relations in Machine-Readable Dictionaries: We describe an automated method for identifying classes of morphologically related words in an on-line dictionary, and for linking individual senses in the derived form to one or more senses in the base form by means of morphological relation attributes. We also present an algorithm for computing a score reflecting the system's certainty in these derivational links; this computation relies on the content of semantic relations associated with each sense, which are extracted automatically by parsing each sense definition and subjecting the parse structure to automated semantic analysis. By processing the entire set of headwords in the dictionary in this fashion we create a large set of directed derivational graphs, which can then be accessed by other components in our broad-coverage NLP system. Spurious or unlikely derivations are not discarded, but are rather added to the dictionary and assigned a negative score; this allows the system to handle non-standard uses of these forms.<|reference_end|> | arxiv | @article{pentheroudakis1994automatically,
title={Automatically Identifying Morphological Relations in
Machine-Readable Dictionaries},
author={Joseph Pentheroudakis and Lucy Vanderwende, Microsoft Corporation},
journal={arXiv preprint arXiv:cmp-lg/9411014},
year={1994},
number={MSR-TR-93-06},
archivePrefix={arXiv},
eprint={cmp-lg/9411014},
primaryClass={cmp-lg cs.CL}
} | pentheroudakis1994automatically |
arxiv-668483 | cmp-lg/9411015 | Parsing Using Linearly Ordered Phonological Rules | <|reference_start|>Parsing Using Linearly Ordered Phonological Rules: A generate and test algorithm is described which parses a surface form into one or more lexical entries using linearly ordered phonological rules. This algorithm avoids the exponential expansion of search space which a naive parsing algorithm would face by encoding into the form being parsed the ambiguities which arise during parsing. The algorithm has been implemented and tested on real language data, and its speed compares favorably with that of a KIMMO-type parser.<|reference_end|> | arxiv | @article{maxwell1994parsing,
title={Parsing Using Linearly Ordered Phonological Rules},
author={Michael Maxwell (Summer Institute of Linguistics)},
journal={arXiv preprint arXiv:cmp-lg/9411015},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411015},
primaryClass={cmp-lg cs.CL}
} | maxwell1994parsing |
arxiv-668484 | cmp-lg/9411016 | Extending DRT with a Focusing Mechanism for Pronominal Anaphora and Ellipsis Resolution | <|reference_start|>Extending DRT with a Focusing Mechanism for Pronominal Anaphora and Ellipsis Resolution: Cormack (1992) proposed a framework for pronominal anaphora resolution. Her proposal integrates focusing theory (Sidner et al.) and DRT (Kamp and Reyle). We analyzed this methodology and adjusted it to the processing of Portuguese texts. The scope of the framework was widened to cover sentences containing restrictive relative clauses and subject ellipsis. Tests were conceived and applied to probe the adequacy of proposed modifications when dealing with processing of current texts.<|reference_end|> | arxiv | @article{abracos1994extending,
title={Extending DRT with a Focusing Mechanism for Pronominal Anaphora and
Ellipsis Resolution},
author={Jose Abracos, Jose Gabriel Lopes},
journal={arXiv preprint arXiv:cmp-lg/9411016},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411016},
primaryClass={cmp-lg cs.CL}
} | abracos1994extending |
arxiv-668485 | cmp-lg/9411017 | Comlex Syntax: Building a Computational Lexicon | <|reference_start|>Comlex Syntax: Building a Computational Lexicon: We describe the design of Comlex Syntax, a computational lexicon providing detailed syntactic information for approximately 38,000 English headwords. We consider the types of errors which arise in creating such a lexicon, and how such errors can be measured and controlled.<|reference_end|> | arxiv | @article{grishman1994comlex,
title={Comlex Syntax: Building a Computational Lexicon},
author={Ralph Grishman, Catherine Macleod, and Adam Meyers (Computer Science
Department, New York University)},
journal={arXiv preprint arXiv:cmp-lg/9411017},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411017},
primaryClass={cmp-lg cs.CL}
} | grishman1994comlex |
arxiv-668486 | cmp-lg/9411018 | Interlanguage Signs and Lexical Transfer Errors | <|reference_start|>Interlanguage Signs and Lexical Transfer Errors: A theory of interlanguage (IL) lexicons is outlined, with emphasis on IL lexical entries, based on the HPSG notion of lexical sign. This theory accounts for idiosyncratic or lexical transfer of syntactic subcategorisation and idioms from the first language to the IL. It also accounts for developmental stages in IL lexical grammar, and grammatical variation in the use of the same lexical item. The theory offers a tool for robust parsing of lexical transfer errors and diagnosis of such errors.<|reference_end|> | arxiv | @article{ro1994interlanguage,
title={Interlanguage Signs and Lexical Transfer Errors},
author={Atle Ro (Department of Phonetics and Linguistics, University of
Bergen)},
journal={arXiv preprint arXiv:cmp-lg/9411018},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411018},
primaryClass={cmp-lg cs.CL}
} | ro1994interlanguage |
arxiv-668487 | cmp-lg/9411019 | Focus on ``only" and ``Not" | <|reference_start|>Focus on ``only" and ``Not": Krifka [1993] has suggested that focus should be seen as a means of providing material for a range of semantic and pragmatic functions to work on, rather than as a specific semantic or pragmatic function itself. The current paper describes an implementation of this general idea, and applies it to the interpretation of {\em only} and {\em not}.<|reference_end|> | arxiv | @article{ramsay1994focus,
title={Focus on ``only" and ``Not"},
author={Allan Ramsay (Department of Computer Science, Univ. College Dublin)},
journal={COLING-94, 881-885},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411019},
primaryClass={cmp-lg cs.CL}
} | ramsay1994focus |
arxiv-668488 | cmp-lg/9411020 | Extraction in Dutch with Lexical Rules | <|reference_start|>Extraction in Dutch with Lexical Rules: Unbounded dependencies are often modelled by ``traces'' (and ``gap threading'') in unification-based grammars. Pollard and Sag, however, suggest an analysis of extraction based on lexical rules, which excludes the notion of traces (P&S 1994, Chapter 9). In parsing, it suggests a trade of indeterminism for lexical ambiguity. This paper provides a short introduction to this approach to extraction with lexical rules, and illustrates the linguistic power of the approach by applying it to particularly idiosyncratic Dutch extraction data.<|reference_end|> | arxiv | @article{rentier1994extraction,
title={Extraction in Dutch with Lexical Rules},
author={Gerrit Rentier (Tilburg University)},
journal={arXiv preprint arXiv:cmp-lg/9411020},
year={1994},
number={ITK Research Report-no 53},
archivePrefix={arXiv},
eprint={cmp-lg/9411020},
primaryClass={cmp-lg cs.CL}
} | rentier1994extraction |
arxiv-668489 | cmp-lg/9411021 | Free-ordered CUG on Chemical Abstract Machine | <|reference_start|>Free-ordered CUG on Chemical Abstract Machine: We propose a paradigm for concurrent natural language generation. In order to represent grammar rules distributively, we adopt categorial unification grammar (CUG) where each category owns its functional type. We augment typed lambda calculus with several new combinators, to make the order of lambda-conversions free for partial / local processing. The concurrent calculus is modeled with Chemical Abstract Machine. We show an example of a Japanese causative auxiliary verb that requires a drastic rearrangement of case domination.<|reference_end|> | arxiv | @article{tojo1994free-ordered,
title={Free-ordered CUG on Chemical Abstract Machine},
author={Satoshi Tojo (Mitsubishi Research Institute, Inc)},
journal={arXiv preprint arXiv:cmp-lg/9411021},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411021},
primaryClass={cmp-lg cs.CL}
} | tojo1994free-ordered |
arxiv-668490 | cmp-lg/9411022 | Adaptive Sentence Boundary Disambiguation | <|reference_start|>Adaptive Sentence Boundary Disambiguation: Labeling of sentence boundaries is a necessary prerequisite for many natural language processing tasks, including part-of-speech tagging and sentence alignment. End-of-sentence punctuation marks are ambiguous; to disambiguate them most systems use brittle, special-purpose regular expression grammars and exception rules. As an alternative, we have developed an efficient, trainable algorithm that uses a lexicon with part-of-speech probabilities and a feed-forward neural network. After training for less than one minute, the method correctly labels over 98.5\% of sentence boundaries in a corpus of over 27,000 sentence-boundary marks. We show the method to be efficient and easily adaptable to different text genres, including single-case texts.<|reference_end|> | arxiv | @article{palmer1994adaptive,
title={Adaptive Sentence Boundary Disambiguation},
author={David D. Palmer and Marti A. Hearst},
journal={Proceedings of ANLP 94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411022},
primaryClass={cmp-lg cs.CL}
} | palmer1994adaptive |
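The trainable part of the method is easy to picture: the context around each candidate punctuation mark is encoded as the part-of-speech prior vectors of the neighbouring words and scored by a small network. The sketch below substitutes a logistic scorer for the paper's feed-forward net; the tag inventory, lexicon format, and context width are all illustrative assumptions.
```python
# Sketch: score a candidate sentence boundary from the POS priors of the
# surrounding words (a logistic stand-in for the paper's feed-forward net).
import numpy as np

POS = ['noun', 'verb', 'adj', 'det', 'punct', 'other']   # toy inventory

def pos_vector(word, lexicon):
    """Prior distribution over POS tags for a word, from the lexicon."""
    v = np.zeros(len(POS))
    for tag, p in lexicon.get(word.lower(), {'other': 1.0}).items():
        v[POS.index(tag)] = p
    return v

def context_features(tokens, i, lexicon, k=3):
    """Concatenated POS vectors of k tokens before and after position i."""
    window = tokens[max(0, i - k):i] + tokens[i + 1:i + 1 + k]
    window += [''] * (2 * k - len(window))               # pad at edges
    return np.concatenate([pos_vector(w, lexicon) for w in window])

def is_boundary(tokens, i, lexicon, w, b):
    """Logistic score over the context; > 0.5 means 'boundary here'."""
    x = context_features(tokens, i, lexicon)
    return 1.0 / (1.0 + np.exp(-(w @ x + b))) > 0.5
```
Because only `w` and `b` (or, in the paper, the net's weights) are learned from labelled boundaries, retraining for a new genre or a single-case corpus is cheap — the adaptability the abstract claims.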
arxiv-668491 | cmp-lg/9411023 | Abstract Generation based on Rhetorical Structure Extraction | <|reference_start|>Abstract Generation based on Rhetorical Structure Extraction: We have developed an automatic abstract generation system for Japanese expository writings based on rhetorical structure extraction. The system first extracts the rhetorical structure, the compound of the rhetorical relations between sentences, and then cuts out less important parts in the extracted structure to generate an abstract of the desired length. Evaluation of the generated abstract showed that it contains at maximum 74\% of the most important sentences of the original text. The system is now utilized as a text browser for a prototypical interactive document retrieval system.<|reference_end|> | arxiv | @article{ono1994abstract,
title={Abstract Generation based on Rhetorical Structure Extraction},
author={Kenji Ono, Kazuo Sumita, Seiji Miike (Research and Development
Center, Toshiba Corporation, Komukai-Toshiba-cho 1, Saiwai-ku, Kawasaki
210, Japan)},
journal={arXiv preprint arXiv:cmp-lg/9411023},
year={1994},
number={COLING-94, pp. 344-348},
archivePrefix={arXiv},
eprint={cmp-lg/9411023},
primaryClass={cmp-lg cs.CL}
} | ono1994abstract |
arxiv-668492 | cmp-lg/9411024 | Reverse Queries in DATR | <|reference_start|>Reverse Queries in DATR: DATR is a declarative representation language for lexical information and as such, in principle, neutral with respect to particular processing strategies. Previous DATR compiler/interpreter systems support only one access strategy that closely resembles the set of inference rules of the procedural semantics of DATR (Evans & Gazdar 1989a). In this paper we present an alternative access strategy (reverse query strategy) for a non-trivial subset of DATR.<|reference_end|> | arxiv | @article{langer1994reverse,
title={Reverse Queries in DATR},
author={Hagen Langer (University of Osnabrueck, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9411024},
year={1994},
number={Proceedings of COLING-94, Vol. 2, pp. 1089-1095},
archivePrefix={arXiv},
eprint={cmp-lg/9411024},
primaryClass={cmp-lg cs.CL}
} | langer1994reverse |
arxiv-668493 | cmp-lg/9411025 | Multi-Dimensional Inheritance | <|reference_start|>Multi-Dimensional Inheritance: In this paper, we present an alternative approach to multiple inheritance for typed feature structures. In our approach, a feature structure can be associated with several types coming from different hierarchies (dimensions). In case of multiple inheritance, a type has supertypes from different hierarchies. We contrast this approach with approaches based on a single type hierarchy where a feature structure has only one unique most general type, and multiple inheritance involves computation of greatest lower bounds in the hierarchy. The proposed approach supports current linguistic analyses in constraint-based formalisms like HPSG, inheritance in the lexicon, and knowledge representation for NLP systems. Finally, we show that multi-dimensional inheritance hierarchies can be compiled into a Prolog term representation, which allows the conjunction of two types to be computed efficiently by Prolog term unification.<|reference_end|> | arxiv | @article{erbach1994multi-dimensional,
title={Multi-Dimensional Inheritance},
author={Gregor Erbach (University of the Saarland, Computational Linguistics
Dept.)},
journal={Proceedings of KONVENS 94 (ed. H. Trost), Vienna, pp. 102-111},
year={1994},
number={CLAUS Report 40},
archivePrefix={arXiv},
eprint={cmp-lg/9411025},
primaryClass={cmp-lg cs.CL}
} | erbach1994multi-dimensional |
arxiv-668494 | cmp-lg/9411026 | Manipulating Human-oriented Dictionaries with very simple tools | <|reference_start|>Manipulating Human-oriented Dictionaries with very simple tools: This paper presents a methodology for building and manipulating human-oriented dictionaries. This methodology has been applied in the construction of a French-English-Malay dictionary which has been obtained by "crossing" semi-automatically two bilingual dictionaries. We use only Microsoft Word, a specialized language for writing transcriptors and a small but powerful dictionary tool.<|reference_end|> | arxiv | @article{gaschler1994manipulating,
title={Manipulating Human-oriented Dictionaries with very simple tools},
author={Jean Gaschler and Mathieu Lafourcade (GETA, IMAG (UJF & CNRS))},
journal={arXiv preprint arXiv:cmp-lg/9411026},
year={1994},
number={COLING-94},
archivePrefix={arXiv},
eprint={cmp-lg/9411026},
primaryClass={cmp-lg cs.CL}
} | gaschler1994manipulating |
arxiv-668495 | cmp-lg/9411027 | Classifier Assignment by Corpus-based Approach | <|reference_start|>Classifier Assignment by Corpus-based Approach: This paper presents an algorithm for selecting an appropriate classifier word for a noun. In the Thai language, it frequently happens that there is fluctuation in the choice of classifier for a given concrete noun, both from the point of view of the whole speech community and of individual speakers. Basically, there is no exact rule for classifier selection. The most a rule-based approach can do is give a default rule that picks a corresponding classifier for each noun. Registration of a classifier for each noun is limited to the type of unit classifier, because other types are open due to the meaning of representation. We propose a corpus-based method (Biber, 1993; Nagao, 1993; Smadja, 1993) which generates Noun Classifier Associations (NCA) to overcome the problems in classifier assignment and semantic construction of noun phrases. The NCA is created statistically from a large corpus and recomposed under concept hierarchy constraints and frequency of occurrences.<|reference_end|> | arxiv | @article{sornlertlamvanich1994classifier,
title={Classifier Assignment by Corpus-based Approach},
author={Virach Sornlertlamvanich and Wantanee Pantachat and Surapant Meknavin},
journal={arXiv preprint arXiv:cmp-lg/9411027},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411027},
primaryClass={cmp-lg cs.CL}
} | sornlertlamvanich1994classifier |
arxiv-668496 | cmp-lg/9411028 | The Speech-Language Interface in the Spoken Language Translator | <|reference_start|>The Speech-Language Interface in the Spoken Language Translator: The Spoken Language Translator is a prototype for practically useful systems capable of translating continuous spoken language within restricted domains. The prototype system translates air travel (ATIS) queries from spoken English to spoken Swedish and to French. It is constructed, with as few modifications as possible, from existing pieces of speech and language processing software. The speech recognizer and language understander are connected by a fairly conventional pipelined N-best interface. This paper focuses on the ways in which the language processor makes intelligent use of the sentence hypotheses delivered by the recognizer. These ways include (1) producing modified hypotheses to reflect the possible presence of repairs in the uttered word sequence; (2) fast parsing with a version of the grammar automatically specialized to the more frequent constructions in the training corpus; and (3) allowing syntactic and semantic factors to interact with acoustic ones in the choice of a meaning structure for translation, so that the acoustically preferred hypothesis is not always selected even if it is within linguistic coverage.<|reference_end|> | arxiv | @article{carter1994the,
title={The Speech-Language Interface in the Spoken Language Translator},
author={David Carter and Manny Rayner (SRI International, Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9411028},
year={1994},
number={CRC-051},
archivePrefix={arXiv},
eprint={cmp-lg/9411028},
primaryClass={cmp-lg cs.CL}
} | carter1994the |
arxiv-668497 | cmp-lg/9411029 | An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities | <|reference_start|>An Efficient Probabilistic Context-Free Parsing Algorithm that Computes Prefix Probabilities: We describe an extension of Earley's parser for stochastic context-free grammars that computes the following quantities given a stochastic context-free grammar and an input string: a) probabilities of successive prefixes being generated by the grammar; b) probabilities of substrings being generated by the nonterminals, including the entire string being generated by the grammar; c) most likely (Viterbi) parse of the string; d) posterior expected number of applications of each grammar production, as required for reestimating rule probabilities. (a) and (b) are computed incrementally in a single left-to-right pass over the input. Our algorithm compares favorably to standard bottom-up parsing methods for SCFGs in that it works efficiently on sparse grammars by making use of Earley's top-down control structure. It can process any context-free rule format without conversion to some normal form, and combines computations for (a) through (d) in a single algorithm. Finally, the algorithm has simple extensions for processing partially bracketed inputs, and for finding partial parses and their likelihoods on ungrammatical inputs.<|reference_end|> | arxiv | @article{stolcke1994an,
title={An Efficient Probabilistic Context-Free Parsing Algorithm that Computes
Prefix Probabilities},
author={Andreas Stolcke (SRI International, Menlo Park, CA 94025)},
journal={arXiv preprint arXiv:cmp-lg/9411029},
year={1994},
number={ICSI TR-93-065 (Revised 11/94)},
archivePrefix={arXiv},
eprint={cmp-lg/9411029},
primaryClass={cmp-lg cs.CL}
} | stolcke1994an |
arxiv-668498 | cmp-lg/9411030 | Complexity of Scrambling: A New Twist to the Competence - Performance Distinction | <|reference_start|>Complexity of Scrambling: A New Twist to the Competence - Performance Distinction: In this paper we discuss the following issue: How do we decide whether a certain property of language is a competence property or a performance property? Our claim is that the answer to this question is not given a priori. The answer depends on the formal devices (formal grammars and machines) available to us for describing language. We discuss this issue in the context of the complexity of processing center embedding (of relative clauses in English) and scrambling (in German, for example) from arbitrary depths of embedding.<|reference_end|> | arxiv | @article{joshi1994complexity,
title={Complexity of Scrambling: A New Twist to the Competence - Performance
Distinction},
author={Aravind K Joshi (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9411030},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411030},
primaryClass={cmp-lg cs.CL}
} | joshi1994complexity |
arxiv-668499 | cmp-lg/9411031 | Automatic Generation of Technical Documentation | <|reference_start|>Automatic Generation of Technical Documentation: Natural-language generation (NLG) techniques can be used to automatically produce technical documentation from a domain knowledge base and linguistic and contextual models. We discuss this application of NLG technology from both a technical and a usefulness (costs and benefits) perspective. This discussion is based largely on our experiences with the IDAS documentation-generation project, and the reactions various interested people from industry have had to IDAS. We hope that this summary of our experiences with IDAS and the lessons we have learned from it will be beneficial for other researchers who wish to build technical-documentation generation systems.<|reference_end|> | arxiv | @article{reiter1994automatic,
title={Automatic Generation of Technical Documentation},
author={Ehud Reiter (CoGenTex, Ithaca, USA), Chris Mellish (University of
Edinburgh, UK), and John Levine (University of Edinburgh, UK)},
journal={arXiv preprint arXiv:cmp-lg/9411031},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411031},
primaryClass={cmp-lg cs.CL}
} | reiter1994automatic |
arxiv-668500 | cmp-lg/9411032 | Has a Consensus NL Generation Architecture Appeared, and is it Psycholinguistically Plausible? | <|reference_start|>Has a Consensus NL Generation Architecture Appeared, and is it Psycholinguistically Plausible?: I survey some recent applications-oriented NL generation systems, and claim that despite very different theoretical backgrounds, these systems have a remarkably similar architecture in terms of the modules they divide the generation process into, the computations these modules perform, and the way the modules interact with each other. I also compare this `consensus architecture' among applied NLG systems with psycholinguistic knowledge about how humans speak, and argue that at least some aspects of the consensus architecture seem to be in agreement with what is known about human language production, despite the fact that psycholinguistic plausibility was not in general a goal of the developers of the surveyed systems.<|reference_end|> | arxiv | @article{reiter1994has,
title={Has a Consensus NL Generation Architecture Appeared, and is it
Psycholinguistically Plausible?},
author={Ehud Reiter (CoGenTex, Ithaca, USA)},
journal={arXiv preprint arXiv:cmp-lg/9411032},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9411032},
primaryClass={cmp-lg cs.CL}
} | reiter1994has |