Dataset fields:
  corpus_id      string, length 7-12
  paper_id       string, length 9-16
  title          string, length 1-261
  abstract       string, length 70-4.02k
  source         string, 1 distinct value
  bibtex         string, length 208-20.9k
  citation_key   string, length 6-100
arxiv-668701
cmp-lg/9508005
A Matching Technique in Example-Based Machine Translation
<|reference_start|>A Matching Technique in Example-Based Machine Translation: This paper addresses an important problem in Example-Based Machine Translation (EBMT), namely how to measure similarity between a sentence fragment and a set of stored examples. A new method is proposed that measures similarity according to both surface structure and content. A second contribution is the use of clustering to make retrieval of the best matching example from the database more efficient. Results on a large number of test cases from the CELEX database are presented.<|reference_end|>
arxiv
@article{cranias1995a, title={A Matching Technique in Example-Based Machine Translation}, author={Lambros Cranias, Harris Papageorgiou, Stelios Piperidis (Institute for Language and Speech Processing, Greece)}, journal={arXiv preprint arXiv:cmp-lg/9508005}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9508005}, primaryClass={cmp-lg cs.CL} }
cranias1995a
arxiv-668702
cmp-lg/9508006
Bi-Lexical Rules for Multi-Lexeme Translation in Lexicalist MT
<|reference_start|>Bi-Lexical Rules for Multi-Lexeme Translation in Lexicalist MT: The paper presents a prototype lexicalist Machine Translation system (based on the so-called `Shake-and-Bake' approach of Whitelock (1992)), consisting of an analysis component, a dynamic bilingual lexicon, and a generation component, and shows how it is applied to a range of MT problems. Multi-lexeme translations are handled through bi-lexical rules which map bilingual lexical signs into new bilingual lexical signs. It is argued that much translation can be handled by equating translationally equivalent lists of lexical signs, either directly in the bilingual lexicon, or by deriving them through bi-lexical rules. Lexical semantic information organized as Qualia structures (Pustejovsky 1991) is used as a mechanism for restricting the domain of the rules.<|reference_end|>
arxiv
@article{trujillo1995bi-lexical, title={Bi-Lexical Rules for Multi-Lexeme Translation in Lexicalist MT}, author={Arturo Trujillo (SCMS, The Robert Gordon University, Aberdeen)}, journal={arXiv preprint arXiv:cmp-lg/9508006}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9508006}, primaryClass={cmp-lg cs.CL} }
trujillo1995bi-lexical
arxiv-668703
cmp-lg/9508007
A Dynamic Approach to Rhythm in Language: Toward a Temporal Phonology
<|reference_start|>A Dynamic Approach to Rhythm in Language: Toward a Temporal Phonology: It is proposed that the theory of dynamical systems offers appropriate tools to model many phonological aspects of both speech production and perception. A dynamic account of speech rhythm is shown to be useful for description of both Japanese mora timing and English timing in a phrase repetition task. This orientation contrasts fundamentally with the more familiar symbolic approach to phonology, in which time is modeled only with sequentially arrayed symbols. It is proposed that an adaptive oscillator offers a useful model for perceptual entrainment (or `locking in') to the temporal patterns of speech production. This helps to explain why speech is often perceived to be more regular than experimental measurements seem to justify. Because dynamic models deal with real time, they also help us understand how languages can differ in their temporal detail---contributing to foreign accents, for example. The fact that languages differ greatly in their temporal detail suggests that these effects are not mere motor universals, but that dynamical models are intrinsic components of the phonological characterization of language.<|reference_end|>
arxiv
@article{port1995a, title={A Dynamic Approach to Rhythm in Language: Toward a Temporal Phonology}, author={Robert Port, Fred Cummins, Michael Gasser (Indiana University)}, journal={arXiv preprint arXiv:cmp-lg/9508007}, year={1995}, number={IU Cognitive Science TR 150}, archivePrefix={arXiv}, eprint={cmp-lg/9508007}, primaryClass={cmp-lg cs.CL} }
port1995a
arxiv-668704
cmp-lg/9508008
On Constraint-Based Lambek Calculi
<|reference_start|>On Constraint-Based Lambek Calculi: We explore the consequences of layering a Lambek proof system over an arbitrary (constraint) logic. A simple model-theoretic semantics for our hybrid language is provided, for which a particularly simple combination of Lambek's proof system and that of the base logic is complete. Furthermore, the proof system for the underlying base logic can be assumed to be a black box. The essential reasoning that needs to be performed by the black box is that of {\em entailment checking}. Assuming feature logic as the base logic, entailment checking amounts to a {\em subsumption} test, which is a well-known quasi-linear time decidable problem.<|reference_end|>
arxiv
@article{doerre1995on, title={On Constraint-Based Lambek Calculi}, author={Jochen Doerre and Suresh Manandhar}, journal={arXiv preprint arXiv:cmp-lg/9508008}, year={1995}, number={Tech. Report HCRC/TR-69, University of Edinburgh}, archivePrefix={arXiv}, eprint={cmp-lg/9508008}, primaryClass={cmp-lg cs.CL} }
doerre1995on
arxiv-668705
cmp-lg/9508009
A Labelled Analytic Theorem Proving Environment for Categorial Grammar
<|reference_start|>A Labelled Analytic Theorem Proving Environment for Categorial Grammar: We present a system for the investigation of computational properties of categorial grammar parsing based on a labelled analytic tableaux theorem prover. This proof method allows us to take a modular approach, in which the basic grammar can be kept constant, while a range of categorial calculi can be captured by assigning different properties to the labelling algebra. The theorem proving strategy is particularly well suited to the treatment of categorial grammar, because it allows us to distribute the computational cost between the algorithm which deals with the grammatical types and the algebraic checker which constrains the derivation.<|reference_end|>
arxiv
@article{luz-filho1995a, title={A Labelled Analytic Theorem Proving Environment for Categorial Grammar}, author={Saturnino F. Luz-Filho and Patrick Sturt (Centre for Cognitive Science, University of Edinburgh)}, journal={To appear in the Proceedings of IWPT-95}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9508009}, primaryClass={cmp-lg cs.CL} }
luz-filho1995a
arxiv-668706
cmp-lg/9508010
Heuristics and Parse Ranking
<|reference_start|>Heuristics and Parse Ranking: There are currently two philosophies for building grammars and parsers -- statistically induced grammars and wide-coverage grammars. One way to combine the strengths of both approaches is to have a wide-coverage grammar with a heuristic component which is domain independent but whose contribution is tuned to particular domains. In this paper, we discuss a three-stage approach to disambiguation in the context of a lexicalized grammar, using a variety of domain independent heuristic techniques. We present a training algorithm which uses hand-bracketed treebank parses to set the weights of these heuristics. We compare the performance of our grammar against the performance of the IBM statistical grammar, using both untrained and trained weights for the heuristics.<|reference_end|>
arxiv
@article{srinivas1995heuristics, title={Heuristics and Parse Ranking}, author={B. Srinivas, Christine Doran and Seth Kulick (University of Pennsylvania)}, journal={International Workshop on Parsing Technologies (IWPT 95)}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510010}, primaryClass={cmp-lg cs.CL} }
srinivas1995heuristics
arxiv-668707
cmp-lg/9508011
The Use of Knowledge Preconditions in Language Processing
<|reference_start|>The Use of Knowledge Preconditions in Language Processing: If an agent does not possess the knowledge needed to perform an action, it may privately plan to obtain the required information on its own, or it may involve another agent in the planning process by engaging it in a dialogue. In this paper, we show how the requirements of knowledge preconditions can be used to account for information-seeking subdialogues in discourse. We first present an axiomatization of knowledge preconditions for the SharedPlan model of collaborative activity (Grosz & Kraus, 1993), and then provide an analysis of information-seeking subdialogues within a general framework for discourse processing. In this framework, SharedPlans and relationships among them are used to model the intentional component of Grosz and Sidner's (1986) theory of discourse structure.<|reference_end|>
arxiv
@article{lochbaum1995the, title={The Use of Knowledge Preconditions in Language Processing}, author={Karen E. Lochbaum (U S WEST Advanced Technologies)}, journal={Proceedings of IJCAI-95}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9508011}, primaryClass={cmp-lg cs.CL} }
lochbaum1995the
arxiv-668708
cmp-lg/9508012
A Natural Law of Succession
<|reference_start|>A Natural Law of Succession: Consider the problem of multinomial estimation. You are given an alphabet of k distinct symbols and are told that the i-th symbol occurred exactly n_i times in the past. On the basis of this information alone, you must now estimate the conditional probability that the next symbol will be i. In this report, we present a new solution to this fundamental problem in statistics and demonstrate that our solution outperforms standard approaches, both in theory and in practice.<|reference_end|>
arxiv
@article{ristad1995a, title={A Natural Law of Succession}, author={Eric Sven Ristad (Princeton University)}, journal={arXiv preprint arXiv:cmp-lg/9508012}, year={1995}, number={pu-495-95}, archivePrefix={arXiv}, eprint={cmp-lg/9508012}, primaryClass={cmp-lg cs.CL} }
ristad1995a
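For context, a minimal sketch of the classical add-one (Laplace) law of succession, the standard estimator that the abstract above positions its new law against; the paper's own estimator is not reproduced here, and the example counts are purely illustrative.

```python
def laplace_estimate(counts):
    """Classical add-one (Laplace) law of succession for multinomial
    estimation: given counts n_i over an alphabet of k symbols,
    p(i) = (n_i + 1) / (N + k). This is the standard baseline the
    abstract refers to, not the paper's proposed estimator."""
    k = len(counts)
    total = sum(counts)
    return [(n_i + 1) / (total + k) for n_i in counts]

# For counts [2, 1, 0] over a 3-symbol alphabet:
# laplace_estimate([2, 1, 0]) -> [0.5, 0.333..., 0.166...]
```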
arxiv-668709
cmp-lg/9509001
How much is enough?: Data requirements for statistical NLP
<|reference_start|>How much is enough?: Data requirements for statistical NLP: In this paper I explore a number of issues in the analysis of data requirements for statistical NLP systems. A preliminary framework for viewing such systems is proposed, and a sample of existing work is compared within this framework. The first steps toward a theory of data requirements are made by establishing some results relevant to bounding the expected error rate of a class of simplified statistical language learners as a function of the volume of training data.<|reference_end|>
arxiv
@article{lauer1995how, title={How much is enough?: Data requirements for statistical NLP}, author={Mark Lauer (Microsoft Institute, Sydney)}, journal={2nd Conference of the Pacific Association for Computational Linguistics, Brisbane, Australia}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9509001}, primaryClass={cmp-lg cs.CL} }
lauer1995how
arxiv-668710
cmp-lg/9509002
Conserving Fuel in Statistical Language Learning: Predicting Data Requirements
<|reference_start|>Conserving Fuel in Statistical Language Learning: Predicting Data Requirements: In this paper I address the practical concern of predicting how much training data is sufficient for a statistical language learning system. First, I briefly review earlier results and show how these can be combined to bound the expected accuracy of a mode-based learner as a function of the volume of training data. I then develop a more accurate estimate of the expected accuracy function under the assumption that inputs are uniformly distributed. Since this estimate is expensive to compute, I also give a close but cheaply computable approximation to it. Finally, I report on a series of simulations exploring the effects of inputs that are not uniformly distributed. Although these results are based on simplistic assumptions, they are a tentative step toward a useful theory of data requirements for SLL systems.<|reference_end|>
arxiv
@article{lauer1995conserving, title={Conserving Fuel in Statistical Language Learning: Predicting Data Requirements}, author={Mark Lauer (Microsoft Institute, Sydney)}, journal={Eighth Australian Joint Conference on Artificial Intelligence, Canberra, 1995.}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9509002}, primaryClass={cmp-lg cs.CL} }
lauer1995conserving
arxiv-668711
cmp-lg/9509003
Cluster Expansions and Iterative Scaling for Maximum Entropy Language Models
<|reference_start|>Cluster Expansions and Iterative Scaling for Maximum Entropy Language Models: The maximum entropy method has recently been successfully introduced to a variety of natural language applications. In each of these applications, however, the power of the maximum entropy method is achieved at the cost of a considerable increase in computational requirements. In this paper we present a technique, closely related to the classical cluster expansion from statistical mechanics, for reducing the computational demands necessary to calculate conditional maximum entropy language models.<|reference_end|>
arxiv
@article{lafferty1995cluster, title={Cluster Expansions and Iterative Scaling for Maximum Entropy Language Models}, author={John D. Lafferty and Bernhard Suhm (Carnegie Mellon)}, journal={arXiv preprint arXiv:cmp-lg/9509003}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9509003}, primaryClass={cmp-lg cs.CL} }
lafferty1995cluster
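As background, a rough sketch of plain Generalized Iterative Scaling for a conditional maximum entropy model, the baseline procedure whose per-iteration normalisation cost motivates the cluster-expansion technique above. The function names, data layout, and the omission of the usual slack feature are simplifying assumptions, not the paper's algorithm.

```python
import numpy as np

def gis(features, data, labels, iters=100):
    """Generalized Iterative Scaling for a conditional maxent model
    p(y|x) proportional to exp(sum_i w_i f_i(x, y)). `data` is a list of
    (x, y) pairs, `labels` the label set, and `features(x, y)` returns a
    fixed-length binary feature vector. GIS strictly requires the feature
    vectors to sum to a constant C; the slack feature that enforces this
    is omitted here for brevity, so this is only an approximation."""
    f = np.array([[features(x, y) for y in labels] for x, _ in data], dtype=float)  # (N, Y, F)
    gold = np.array([labels.index(y) for _, y in data])
    C = f.sum(axis=2).max()
    empirical = f[np.arange(len(data)), gold].sum(axis=0)      # observed feature counts
    w = np.zeros(f.shape[2])
    for _ in range(iters):
        scores = f @ w                                          # (N, Y)
        p = np.exp(scores - scores.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)                       # p(y|x) under current weights
        expected = np.einsum('ny,nyf->f', p, f)                 # model's expected feature counts
        w += np.log(np.maximum(empirical, 1e-12) / np.maximum(expected, 1e-12)) / C
    return w
```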
arxiv-668712
cmp-lg/9509004
The Development and Migration of Concepts from Donor to Borrower Disciplines: Sublanguage Term Use in Hard & Soft Sciences
<|reference_start|>The Development and Migration of Concepts from Donor to Borrower Disciplines: Sublanguage Term Use in Hard & Soft Sciences: Academic disciplines, often divided into hard and soft sciences, may be understood as "donor disciplines" if they produce more concepts than they borrow from other disciplines, or "borrower disciplines" if they import more than they originate. Terms used to describe these concepts can be used to distinguish between hard and soft, donor and borrower, as well as individual discipline-specific sublanguages. Using term frequencies, the birth, growth, death, and migration of concepts and their associated terms are examined.<|reference_end|>
arxiv
@article{losee1995the, title={The Development and Migration of Concepts from Donor to Borrower Disciplines: Sublanguage Term Use in Hard & Soft Sciences}, author={Robert M. Losee (SILS, U. of North Carolina, Chapel Hill)}, journal={arXiv preprint arXiv:cmp-lg/9509004}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9509004}, primaryClass={cmp-lg cs.CL} }
losee1995the
arxiv-668713
cmp-lg/9509005
ParseTalk about Textual Ellipsis
<|reference_start|>ParseTalk about Textual Ellipsis: A hybrid methodology for the resolution of text-level ellipsis is presented in this paper. It incorporates conceptual proximity criteria applied to ontologically well-engineered domain knowledge bases and an approach to centering based on functional topic/comment patterns. We state text grammatical predicates for ellipsis and then turn to the procedural aspects of their evaluation within the framework of an actor-based implementation of a lexically distributed parser.<|reference_end|>
arxiv
@article{strube1995parsetalk, title={ParseTalk about Textual Ellipsis}, author={Michael Strube and Udo Hahn (Computational Linguistics Research Group, Freiburg University, Germany)}, journal={RANLP 95: Proc. of the Intl. Conf. on Recent Advances in Natural Language Processing. Tzigov Chark, Bulgaria, Sep. 14-16 1995, pp.62-72.}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9509005}, primaryClass={cmp-lg cs.CL} }
strube1995parsetalk
arxiv-668714
cmp-lg/9510001
POS Tagging Using Relaxation Labelling
<|reference_start|>POS Tagging Using Relaxation Labelling: Relaxation labelling is an optimization technique used in many fields to solve constraint satisfaction problems. The algorithm finds a combination of values for a set of variables that satisfies, to the maximum possible degree, a set of given constraints. This paper describes some experiments in applying it to POS tagging, and the results obtained. It also considers the possibility of applying it to word sense disambiguation.<|reference_end|>
arxiv
@article{padro1995pos, title={POS Tagging Using Relaxation Labelling}, author={Lluis Padro (Dept LSI, Universitat Politecnica de Catalunya)}, journal={arXiv preprint arXiv:cmp-lg/9510001}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510001}, primaryClass={cmp-lg cs.CL} }
padro1995pos
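A minimal, generic relaxation labelling update in the style the abstract describes; the support function, iteration count, and label set are placeholders rather than the paper's actual constraint model.

```python
import numpy as np

def relaxation_labelling(p, support, iters=50):
    """Generic relaxation labelling: p[i, l] holds the current weight of
    label l (e.g. a POS tag) for variable i (e.g. a word), and support(p)
    returns S[i, l], the net support each label receives from the
    constraints given the current weights (assumed to be > -1).
    Well-supported labels are reinforced and the weights renormalised
    until the assignment (roughly) stabilises."""
    for _ in range(iters):
        s = support(p)
        p = p * (1.0 + s)                       # reward well-supported labels
        p = p / p.sum(axis=1, keepdims=True)    # renormalise per variable
    return p
```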
arxiv-668715
cmp-lg/9510002
Using Chinese Text Processing Technique for the Processing of Sanskrit Based Indian Languages: Maximum Resource Utilization and Maximum Compatibility
<|reference_start|>Using Chinese Text Processing Technique for the Processing of Sanskrit Based Indian Languages: Maximum Resource Utilization and Maximum Compatibility: Chinese text processing systems use Double Byte Coding, while almost all existing Sanskrit-based Indian languages have been using Single Byte Coding for text processing. Chinese information processing techniques have already achieved great technical development, both in the East and the West. In contrast, Indian languages are being processed by computer, more or less, for word processing purposes. This paper mainly emphasizes the method of processing Indian languages from a computational linguistic point of view. An overall design method is illustrated; this method concentrates on maximum resource utilization and compatibility, the ultimate goal being a multiplatform multilingual system. Keywords: Text Processing, Multilingual Text Processing, Chinese Language Processing, Indian Language Processing, Character Coding.<|reference_end|>
arxiv
@article{hasan1995using, title={Using Chinese Text Processing Technique for the Processing of Sanskrit Based Indian Languages: Maximum Resource Utilization and Maximum Compatibility}, author={Md Maruf Hasan (Dept. of Information System & Computer Science National University of Singapore)}, journal={arXiv preprint arXiv:cmp-lg/9510002}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510002}, primaryClass={cmp-lg cs.CL} }
hasan1995using
arxiv-668716
cmp-lg/9510003
A Proposal for Word Sense Disambiguation using Conceptual Distance
<|reference_start|>A Proposal for Word Sense Disambiguation using Conceptual Distance: This paper presents a method for the resolution of lexical ambiguity and its automatic evaluation over the Brown Corpus. The method relies on the use of the wide-coverage noun taxonomy of WordNet and the notion of conceptual distance among concepts, captured by a Conceptual Density formula developed for this purpose. This fully automatic method requires no hand coding of lexical entries, hand tagging of text nor any kind of training process. The results of the experiment have been automatically evaluated against SemCor, the sense-tagged version of the Brown Corpus.<|reference_end|>
arxiv
@article{agirre1995a, title={A Proposal for Word Sense Disambiguation using Conceptual Distance}, author={Eneko Agirre (Euskal Herriko Unibertsitatea), German Rigau (Universitat Politecnica de Catalunya)}, journal={1st Intl. Conf. on recent Advances in NLP. Bulgaria. 1995.}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510003}, primaryClass={cmp-lg cs.CL} }
agirre1995a
arxiv-668717
cmp-lg/9510004
Disambiguating bilingual nominal entries against WordNet
<|reference_start|>Disambiguating bilingual nominal entries against WordNet: This paper explores the acquisition of conceptual knowledge from bilingual dictionaries (French/English, Spanish/English and English/Spanish) using a pre-existing broad-coverage Lexical Knowledge Base (LKB), WordNet. Bilingual nominal entries are disambiguated against WordNet, thereby linking the bilingual dictionaries to WordNet and yielding a multilingual LKB (MLKB). The resulting MLKB has the same structure as WordNet, but some nodes are attached additionally to disambiguated vocabulary of other languages. Two different, complementary approaches are explored. In one of the approaches each entry of the dictionary is taken in turn, exploiting the information in the entry itself. The inferential capability for disambiguating the translation is given by Semantic Density over WordNet. In the other approach, the bilingual dictionary was merged with WordNet, exploiting mainly synonymy relations. Each of the approaches was used in a different dictionary. Both approaches attain high levels of precision on their own, showing that disambiguating bilingual nominal entries, and therefore linking bilingual dictionaries to WordNet, is a feasible task.<|reference_end|>
arxiv
@article{rigau1995disambiguating, title={Disambiguating bilingual nominal entries against WordNet}, author={German Rigau (Universitat Politecnica de Catalunya), Eneko Agirre (Euskal Herriko Unibertsitatea)}, journal={Workshop On The Computational Lexicon - ESSLLI 95.}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510004}, primaryClass={cmp-lg cs.CL} }
rigau1995disambiguating
arxiv-668718
cmp-lg/9510005
Developing and Evaluating a Probabilistic LR Parser of Part-of-Speech and Punctuation Labels
<|reference_start|>Developing and Evaluating a Probabilistic LR Parser of Part-of-Speech and Punctuation Labels: We describe an approach to robust domain-independent syntactic parsing of unrestricted naturally-occurring (English) input. The technique involves parsing sequences of part-of-speech and punctuation labels using a unification-based grammar coupled with a probabilistic LR parser. We describe the coverage of several corpora using this grammar and report the results of a parsing experiment using probabilities derived from bracketed training data. We report the first substantial experiments to assess the contribution of punctuation to deriving an accurate syntactic analysis, by parsing identical texts both with and without naturally-occurring punctuation marks.<|reference_end|>
arxiv
@article{briscoe1995developing, title={Developing and Evaluating a Probabilistic LR Parser of Part-of-Speech and Punctuation Labels}, author={Ted Briscoe (Cambridge University), and John Carroll (Cambridge University)}, journal={4th International Workshop on Parsing Technologies (IWPT-95), 48-58}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510005}, primaryClass={cmp-lg cs.CL} }
briscoe1995developing
arxiv-668719
cmp-lg/9510006
Incorporating Discourse Aspects in English -- Polish MT: Towards Robust Implementation
<|reference_start|>Incorporating Discourse Aspects in English -- Polish MT: Towards Robust Implementation: The main aim of translation is an accurate transfer of meaning so that the result is not only grammatically and lexically correct but also communicatively adequate. This paper stresses the need for discourse analysis the aim of which is to preserve the communicative meaning in English--Polish machine translation. Unlike English, which is a positional language with word order grammatically determined, Polish displays a strong tendency to order constituents according to their degree of salience, so that the most informationally salient elements are placed towards the end of the clause regardless of their grammatical function. The Centering Theory developed for tracking down given information units in English and the Theory of Functional Sentence Perspective predicting informativeness of subsequent constituents provide theoretical background for this work. The notion of {\em center} is extended to accommodate not only for pronominalisation and exact reiteration but also for definiteness and other center pointing constructs. Center information is additionally graded and applicable to all primary constituents in a given utterance. This information is used to order the post-transfer constituents correctly, relying on statistical regularities and some syntactic clues.<|reference_end|>
arxiv
@article{stys1995incorporating, title={Incorporating Discourse Aspects in English -- Polish MT: Towards Robust Implementation}, author={Malgorzata E. Stys (University of Cambridge), Stefan S. Zemke (Linkoping University)}, journal={arXiv preprint arXiv:cmp-lg/9510006}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510006}, primaryClass={cmp-lg cs.CL} }
stys1995incorporating
arxiv-668720
cmp-lg/9510007
Automatic Identification of Support Verbs: A Step Towards a Definition of Semantic Weight
<|reference_start|>Automatic Identification of Support Verbs: A Step Towards a Definition of Semantic Weight: Current definitions of notions of lexical density and semantic weight are based on the division of words into closed and open classes, and on intuition. This paper develops a computationally tractable definition of semantic weight, concentrating on what it means for a word to be semantically light; the definition involves looking at the frequency of a word in particular syntactic constructions which are indicative of lightness. Verbs such as "make" and "take", when they function as support verbs, are often considered to be semantically light. To test our definition, we carried out an experiment based on that of Grefenstette and Teufel (1995), where we automatically identify light instances of these words in a corpus; this was done by incorporating our frequency-related definition of semantic weight into a statistical approach similar to that of Grefenstette and Teufel. The results show that this is a plausible definition of semantic lightness for verbs, which can possibly be extended to defining semantic lightness for other classes of words.<|reference_end|>
arxiv
@article{dras1995automatic, title={Automatic Identification of Support Verbs: A Step Towards a Definition of Semantic Weight}, author={Mark Dras (Natural Language Unit, Microsoft Institute, Australia)}, journal={arXiv preprint arXiv:cmp-lg/9510007}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510007}, primaryClass={cmp-lg cs.CL} }
dras1995automatic
arxiv-668721
cmp-lg/9510008
Toward an MT System without Pre-Editing --- Effects of New Methods in ALT-J/E ---
<|reference_start|>Toward an MT System without Pre-Editing --- Effects of New Methods in ALT-J/E ---: Recently, several types of Japanese-to-English machine translation systems have been developed, but all of them require an initial process of rewriting the original text into easily translatable Japanese. Therefore these systems are unsuitable for translating information that needs to be speedily disseminated. To overcome this limitation, a Multi-Level Translation Method based on the Constructive Process Theory has been proposed. This paper describes the benefits of using this method in the Japanese-to-English machine translation system ALT-J/E. In comparison with conventional compositional methods, the Multi-Level Translation Method emphasizes the importance of the meaning contained in expression structures as a whole. It is shown to be capable of translating typical written Japanese based on the meaning of the text in its context, with comparative ease. We are now hopeful of carrying out useful machine translation with no manual pre-editing.<|reference_end|>
arxiv
@article{ikehara1995toward, title={Toward an MT System without Pre-Editing --- Effects of New Methods in ALT-J/E ---}, author={Satoru Ikehara (NTT), Satoshi Shirai (NTT), Akio Yokoo (NTT), Hiromi Nakaiwa (NTT)}, journal={Proceedings of MT Summit III, 1991, 101-106.}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9510008}, primaryClass={cmp-lg cs.CL} }
ikehara1995toward
arxiv-668722
cmp-lg/9511001
Countability and Number in Japanese-to-English Machine Translation
<|reference_start|>Countability and Number in Japanese-to-English Machine Translation: This paper presents a heuristic method that uses information in the Japanese text along with knowledge of English countability and number stored in transfer dictionaries to determine the countability and number of English noun phrases. Incorporating this method into the machine translation system ALT-J/E, helped to raise the percentage of noun phrases generated with correct use of articles and number from 65% to 73%.<|reference_end|>
arxiv
@article{bond1995countability, title={Countability and Number in Japanese-to-English Machine Translation}, author={Francis Bond (NTT), Kentaro Ogura (NTT), Satoru Ikehara (NTT)}, journal={Proceedings of the 15th International Conference on Computational Linguistics (COLING'94), pp 32--38.}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9511001}, primaryClass={cmp-lg cs.CL} }
bond1995countability
arxiv-668723
cmp-lg/9511002
Letting the Cat out of the Bag: Generation for Shake-and-Bake MT
<|reference_start|>Letting the Cat out of the Bag: Generation for Shake-and-Bake MT: Describes an algorithm for the generation phase of a Shake-and-Bake Machine Translation system. Since the problem is NP-complete, it is unlikely that the algorithm will be efficient in all cases, but for the cases tested it offers an improvement over Whitelock's previously published algorithm. The work was carried out while the author was employed at Sharp Laboratories of Europe Ltd.<|reference_end|>
arxiv
@article{brew1995letting, title={Letting the Cat out of the Bag: Generation for Shake-and-Bake MT}, author={Chris Brew (Language Technology Group, HCRC, Edinburgh)}, journal={arXiv preprint arXiv:cmp-lg/9511002}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9511002}, primaryClass={cmp-lg cs.CL} }
brew1995letting
arxiv-668724
cmp-lg/9511003
The Effect of Resource Limits and Task Complexity on Collaborative Planning in Dialogue
<|reference_start|>The Effect of Resource Limits and Task Complexity on Collaborative Planning in Dialogue: This paper shows how agents' choice in communicative action can be designed to mitigate the effect of their resource limits in the context of particular features of a collaborative planning task. I first motivate a number of hypotheses about effective language behavior based on a statistical analysis of a corpus of natural collaborative planning dialogues. These hypotheses are then tested in a dialogue testbed whose design is motivated by the corpus analysis. Experiments in the testbed examine the interaction between (1) agents' resource limits in attentional capacity and inferential capacity; (2) agents' choice in communication; and (3) features of communicative tasks that affect task difficulty such as inferential complexity, degree of belief coordination required, and tolerance for errors. The results show that good algorithms for communication must be defined relative to the agents' resource limits and the features of the task. Algorithms that are inefficient for inferentially simple, low coordination or fault-tolerant tasks are effective when tasks require coordination or complex inferences, or are fault-intolerant. The results provide an explanation for the occurrence of utterances in human dialogues that, prima facie, appear inefficient, and provide the basis for the design of effective algorithms for communicative choice for resource limited agents.<|reference_end|>
arxiv
@article{walker1995the, title={The Effect of Resource Limits and Task Complexity on Collaborative Planning in Dialogue}, author={Marilyn A. Walker}, journal={Artificial Intelligence Journal 85(1-2), pp. 181-243, 1996}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9511003}, primaryClass={cmp-lg cs.CL} }
walker1995the
arxiv-668725
cmp-lg/9511004
An investigation into the correlation of cue phrases, unfilled pauses and the structuring of spoken discourse
<|reference_start|>An investigation into the correlation of cue phrases, unfilled pauses and the structuring of spoken discourse: Expectations about the correlation of cue phrases, the duration of unfilled pauses and the structuring of spoken discourse are framed in light of Grosz and Sidner's theory of discourse and are tested for a directions-giving dialogue. The results suggest that cue phrase and discourse structuring tasks may align, and show a correlation for pause length and some of the modifications that speakers can make to discourse structure.<|reference_end|>
arxiv
@article{cahn1995an, title={An investigation into the correlation of cue phrases, unfilled pauses and the structuring of spoken discourse}, author={Janet Cahn (Massachusetts Institute of Technology)}, journal={Proceedings of the IRCS Workshop on Prosody in Natural Speech, University of Pennsylvania. (1992) 19-30}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9511004}, primaryClass={cmp-lg cs.CL} }
cahn1995an
arxiv-668726
cmp-lg/9511005
Chart-driven Connectionist Categorial Parsing of Spoken Korean
<|reference_start|>Chart-driven Connectionist Categorial Parsing of Spoken Korean: While most of the speech and natural language systems developed for English and other Indo-European languages neglect morphological processing and integrate speech and natural language at the word level, for agglutinative languages such as Korean and Japanese, morphological processing plays a major role in language processing, since these languages have very complex morphological phenomena and relatively simple syntactic functionality. Obviously, degenerate morphological processing limits the usable vocabulary size of the system, and a word-level dictionary results in an exponential explosion in the number of dictionary entries. For agglutinative languages, we need sub-word-level integration, which leaves room for general morphological processing. In this paper, we develop a phoneme-level integration model of speech and linguistic processing through general morphological analysis for agglutinative languages, and an efficient parsing scheme for that integration. Korean is modeled lexically based on the categorial grammar formalism with unordered argument and suppressed category extensions, and a chart-driven connectionist parsing method is introduced.<|reference_end|>
arxiv
@article{lee1995chart-driven, title={Chart-driven Connectionist Categorial Parsing of Spoken Korean}, author={WonIl Lee, Geunbae Lee, Jong-Hyeok Lee (POSTECH, Korea)}, journal={arXiv preprint arXiv:cmp-lg/9511005}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9511005}, primaryClass={cmp-lg cs.CL} }
lee1995chart-driven
arxiv-668727
cmp-lg/9511006
Disambiguating Noun Groupings with Respect to WordNet Senses
<|reference_start|>Disambiguating Noun Groupings with Respect to WordNet Senses: Word groupings useful for language processing tasks are increasingly available, as thesauri appear on-line, and as distributional word clustering techniques improve. However, for many tasks, one is interested in relationships among word {\em senses}, not words. This paper presents a method for automatic sense disambiguation of nouns appearing within sets of related nouns --- the kind of data one finds in on-line thesauri, or as the output of distributional clustering algorithms. Disambiguation is performed with respect to WordNet senses, which are fairly fine-grained; however, the method also permits the assignment of higher-level WordNet categories rather than sense labels. The method is illustrated primarily by example, though results of a more rigorous evaluation are also presented.<|reference_end|>
arxiv
@article{resnik1995disambiguating, title={Disambiguating Noun Groupings with Respect to WordNet Senses}, author={Philip Resnik}, journal={Proceedings of the 3rd Workshop on Very Large Corpora, MIT, 30 June 1995}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9511006}, primaryClass={cmp-lg cs.CL} }
resnik1995disambiguating
arxiv-668728
cmp-lg/9511007
Using Information Content to Evaluate Semantic Similarity in a Taxonomy
<|reference_start|>Using Information Content to Evaluate Semantic Similarity in a Taxonomy: This paper presents a new measure of semantic similarity in an IS-A taxonomy, based on the notion of information content. Experimental evaluation suggests that the measure performs encouragingly well (a correlation of r = 0.79 with a benchmark set of human similarity judgments, with an upper bound of r = 0.90 for human subjects performing the same task), and significantly better than the traditional edge counting approach (r = 0.66).<|reference_end|>
arxiv
@article{resnik1995using, title={Using Information Content to Evaluate Semantic Similarity in a Taxonomy}, author={Philip Resnik}, journal={Proceedings of the 14th International Joint Conference on Artificial Intelligence}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9511007}, primaryClass={cmp-lg cs.CL} }
resnik1995using
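A small sketch of the information-content similarity measure the abstract describes: the similarity of two concepts is the information content, -log p(c), of their most informative common subsumer in the IS-A taxonomy. The concrete data structures (frequency table, subsumer function) are assumptions about how the taxonomy might be represented, not part of the paper.

```python
import math

def information_content(freq, concept, total):
    # IC(c) = -log p(c), with p(c) estimated from corpus frequencies
    # propagated up the taxonomy (each noun occurrence also counts
    # for every concept that subsumes it).
    return -math.log(freq[concept] / total)

def resnik_similarity(c1, c2, subsumers, freq, total):
    """sim(c1, c2) = max over shared subsumers c of -log p(c).
    `subsumers(c)` yields c together with all of its IS-A ancestors."""
    common = set(subsumers(c1)) & set(subsumers(c2))
    if not common:
        return 0.0
    return max(information_content(freq, c, total) for c in common)
```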
arxiv-668729
cmp-lg/9512001
Analysis of the Arabic Broken Plural and Diminutive
<|reference_start|>Analysis of the Arabic Broken Plural and Diminutive: This paper demonstrates how the challenging problem of the Arabic broken plural and diminutive can be handled under a multi-tape two-level model, an extension to two-level morphology.<|reference_end|>
arxiv
@article{kiraz1995analysis, title={Analysis of the Arabic Broken Plural and Diminutive}, author={George A. Kiraz (University of Cambridge)}, journal={arXiv preprint arXiv:cmp-lg/9512001}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9512001}, primaryClass={cmp-lg cs.CL} }
kiraz1995analysis
arxiv-668730
cmp-lg/9512002
The Unsupervised Acquisition of a Lexicon from Continuous Speech
<|reference_start|>The Unsupervised Acquisition of a Lexicon from Continuous Speech: We present an unsupervised learning algorithm that acquires a natural-language lexicon from raw speech. The algorithm is based on the optimal encoding of symbol sequences in an MDL framework, and uses a hierarchical representation of language that overcomes many of the problems that have stymied previous grammar-induction procedures. The forward mapping from symbol sequences to the speech stream is modeled using features based on articulatory gestures. We present results on the acquisition of lexicons and language models from raw speech, text, and phonetic transcripts, and demonstrate that our algorithm compares very favorably to other reported results with respect to segmentation performance and statistical efficiency.<|reference_end|>
arxiv
@article{de marcken1995the, title={The Unsupervised Acquisition of a Lexicon from Continuous Speech}, author={Carl de Marcken (MIT Artificial Intelligence Laboratory)}, journal={arXiv preprint arXiv:cmp-lg/9512002}, year={1995}, number={MIT AI Memo No. 1558/CBCL Memo No. 129}, archivePrefix={arXiv}, eprint={cmp-lg/9512002}, primaryClass={cmp-lg cs.CL} }
de marcken1995the
arxiv-668731
cmp-lg/9512003
Limited Attention and Discourse Structure
<|reference_start|>Limited Attention and Discourse Structure: This squib examines the role of limited attention in a theory of discourse structure and proposes a model of attentional state that relates current hierarchical theories of discourse structure to empirical evidence about human discourse processing capabilities. First, I present examples that are not predicted by Grosz and Sidner's stack model of attentional state. Then I consider an alternative model of attentional state, the cache model, which accounts for the examples, and which makes particular processing predictions. Finally I suggest a number of ways that future research could distinguish the predictions of the cache model and the stack model.<|reference_end|>
arxiv
@article{walker1995limited, title={Limited Attention and Discourse Structure}, author={Marilyn A. Walker}, journal={arXiv preprint arXiv:cmp-lg/9512003}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9512003}, primaryClass={cmp-lg cs.CL} }
walker1995limited
arxiv-668732
cmp-lg/9512004
Natural language processing: she needs something old and something new (maybe something borrowed and something blue, too)
<|reference_start|>Natural language processing: she needs something old and something new (maybe something borrowed and something blue, too): Given the present state of work in natural language processing, this address argues first, that advance in both science and applications requires a revival of concern about what language is about, broadly speaking the world; and second, that an attack on the summarising task, which is made ever more important by the growth of electronic text resources and requires an understanding of the role of large-scale discourse structure in marking important text content, is a good way forward.<|reference_end|>
arxiv
@article{jones1995natural, title={Natural language processing: she needs something old and something new (maybe something borrowed and something blue, too)}, author={Karen Sparck Jones (Computer Laboratory, University of Cambridge)}, journal={arXiv preprint arXiv:cmp-lg/9512004}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9512004}, primaryClass={cmp-lg cs.CL} }
jones1995natural
arxiv-668733
cmp-lg/9512005
Term Encoding of Typed Feature Structures
<|reference_start|>Term Encoding of Typed Feature Structures: This paper presents an approach to Prolog-style term encoding of typed feature structures. The typed feature structures to be encoded are constrained by appropriateness conditions as in Carpenter's ALE system. But unlike ALE, we impose a further, independently motivated closed-world assumption. This assumption allows us to apply term encoding in cases that were problematic for previous approaches. In particular, previous approaches have ruled out multiple inheritance and further specification of feature-value declarations on subtypes. In the present approach, these special cases can be handled as well, though with some increase in complexity. For grammars without multiple inheritance and specification of feature values, the encoding presented here reduces to that of previous approaches.<|reference_end|>
arxiv
@article{gerdemann1995term, title={Term Encoding of Typed Feature Structures}, author={Dale Gerdemann}, journal={Proceedings of the Fourth International Workshop on Parsing Technologies, pp. 89-98, 1995}, year={1995}, archivePrefix={arXiv}, eprint={cmp-lg/9512005}, primaryClass={cmp-lg cs.CL} }
gerdemann1995term
arxiv-668734
cmp-lg/9601001
Automatic Inference of DATR Theories
<|reference_start|>Automatic Inference of DATR Theories: This paper presents an approach for the automatic acquisition of linguistic knowledge from unstructured data. The acquired knowledge is represented in the lexical knowledge representation language DATR. A set of transformation rules that establish inheritance relationships and a default-inference algorithm make up the basis components of the system. Since the overall approach is not restricted to a special domain, the heuristic inference strategy uses criteria to evaluate the quality of a DATR theory, where different domains may require different criteria. The system is applied to the linguistic learning task of German noun inflection.<|reference_end|>
arxiv
@article{barg1996automatic, title={Automatic Inference of DATR Theories}, author={Petra Barg (University of Duesseldorf)}, journal={arXiv preprint arXiv:cmp-lg/9601001}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601001}, primaryClass={cmp-lg cs.CL} }
barg1996automatic
arxiv-668735
cmp-lg/9601002
Generic rules and non-constituent coordination
<|reference_start|>Generic rules and non-constituent coordination: We present a metagrammatical formalism, {\em generic rules}, to give a default interpretation to grammar rules. Our formalism introduces a process of {\em dynamic binding} interfacing the level of pure grammatical knowledge representation and the parsing level. We present an approach to non-constituent coordination within categorial grammars, and reformulate it as a generic rule. This reformulation is context-free parsable and reduces drastically the search space associated to the parsing task for such phenomena.<|reference_end|>
arxiv
@article{gonzalo1996generic, title={Generic rules and non-constituent coordination}, author={Julio Gonzalo and Teresa Solias}, journal={IV International Workshop on Parsing Technologies (IWPT 95)}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601002}, primaryClass={cmp-lg cs.CL} }
gonzalo1996generic
arxiv-668736
cmp-lg/9601003
Report of the Study Group on Assessment and Evaluation
<|reference_start|>Report of the Study Group on Assessment and Evaluation: This is an interim report discussing possible guidelines for the assessment and evaluation of projects developing speech and language systems. It was prepared at the request of the European Commission DG XIII by an ad hoc study group, and is now being made available in the form in which it was submitted to the Commission. However, the report is not an official European Commission document, and does not reflect European Commission policy, official or otherwise. After a discussion of terminology, the report focusses on combining user-centred and technology-centred assessment, and on how meaningful comparisons can be made of a variety of systems performing different tasks for different domains. The report outlines the kind of infra-structure that might be required to support comparative assessment and evaluation of heterogenous projects, and also the results of a questionnaire concerning different approaches to evaluation.<|reference_end|>
arxiv
@article{crouch1996report, title={Report of the Study Group on Assessment and Evaluation}, author={Richard Crouch (SRI International, Cambridge), Robert Gaizauskas (Sheffield University), Klaus Netter (DFKI, Saarbruecken)}, journal={arXiv preprint arXiv:cmp-lg/9601003}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601003}, primaryClass={cmp-lg cs.CL} }
crouch1996report
arxiv-668737
cmp-lg/9601004
Similarity between Words Computed by Spreading Activation on an English Dictionary
<|reference_start|>Similarity between Words Computed by Spreading Activation on an English Dictionary: This paper proposes a method for measuring semantic similarity between words as a new tool for text analysis. The similarity is measured on a semantic network constructed systematically from a subset of the English dictionary, LDOCE (Longman Dictionary of Contemporary English). Spreading activation on the network can directly compute the similarity between any two words in the Longman Defining Vocabulary, and indirectly the similarity of all the other words in LDOCE. The similarity represents the strength of lexical cohesion or semantic relation, and also provides valuable information about similarity and coherence of texts.<|reference_end|>
arxiv
@article{kozima1996similarity, title={Similarity between Words Computed by Spreading Activation on an English Dictionary}, author={Hideki Kozima and Teiji Furugori (University of Electro-Communications, Japan)}, journal={Proceedings of EACL-93 (Utrecht), pp.232-239, 1993.}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601004}, primaryClass={cmp-lg cs.CL} }
kozima1996similarity
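An illustrative toy version of spreading activation over a dictionary-derived word graph, in the spirit of the abstract above; the graph construction, decay factor, and step count are all placeholder assumptions, not the paper's actual semantic network or parameters.

```python
def spreading_activation(graph, sources, decay=0.5, steps=10):
    """Toy spreading activation: activation starts at the source word(s)
    and spreads to neighbouring words (e.g. words appearing in a word's
    dictionary definition) with a decay at each step. The activation one
    word ends up with when another word is activated can then serve as a
    crude word-similarity score. `graph[w]` lists the words linked to w."""
    activation = {w: 1.0 for w in sources}
    for _ in range(steps):
        spread = dict(activation)
        for w, a in activation.items():
            neighbours = graph.get(w, [])
            for nb in neighbours:
                spread[nb] = spread.get(nb, 0.0) + decay * a / max(len(neighbours), 1)
        activation = spread
    return activation
```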
arxiv-668738
cmp-lg/9601005
Text Segmentation Based on Similarity between Words
<|reference_start|>Text Segmentation Based on Similarity between Words: This paper proposes a new indicator of text structure, called the lexical cohesion profile (LCP), which locates segment boundaries in a text. A text segment is a coherent scene; the words in a segment are linked together via lexical cohesion relations. LCP records mutual similarity of words in a sequence of text. The similarity of words, which represents their cohesiveness, is computed using a semantic network. Comparison with the text segments marked by a number of subjects shows that LCP closely correlates with the human judgments. LCP may provide valuable information for resolving anaphora and ellipsis.<|reference_end|>
arxiv
@article{kozima1996text, title={Text Segmentation Based on Similarity between Words}, author={Hideki Kozima (University of Electro-Communications, Japan)}, journal={Proceedings of ACL-93 (Ohio), pp.286-288, 1993.}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601005}, primaryClass={cmp-lg cs.CL} }
kozima1996text
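A simplified cohesion-profile sketch of the segmentation idea in the abstract: average the pairwise similarity of words inside a sliding window, then treat local minima of the profile as candidate segment boundaries. The window size and the flat (unweighted) window are assumptions of this sketch; the paper's LCP computes cohesiveness on a semantic network with its own windowing.

```python
def cohesion_profile(words, similarity, window=11):
    """Compute a lexical-cohesion profile over a word sequence and return
    candidate boundaries at its local minima. `similarity(w1, w2)` stands
    in for the semantic-network similarity measure."""
    half = window // 2
    profile = []
    for i in range(len(words)):
        span = words[max(0, i - half): i + half + 1]
        pairs = [(a, b) for j, a in enumerate(span) for b in span[j + 1:]]
        score = sum(similarity(a, b) for a, b in pairs) / max(len(pairs), 1)
        profile.append(score)
    boundaries = [i for i in range(1, len(profile) - 1)
                  if profile[i] < profile[i - 1] and profile[i] < profile[i + 1]]
    return profile, boundaries
```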
arxiv-668739
cmp-lg/9601006
Possessive Pronouns as Determiners in Japanese-to-English Machine Translation
<|reference_start|>Possessive Pronouns as Determiners in Japanese-to-English Machine Translation: Possessive pronouns are used as determiners in English when no equivalent would be used in a Japanese sentence with the same meaning. This paper proposes a heuristic method of generating such possessive pronouns even when there is no equivalent in the Japanese. The method uses information about the use of possessive pronouns in English treated as a lexical property of nouns, in addition to contextual information about noun phrase referentiality and the subject and main verb of the sentence that the noun phrase appears in. The proposed method has been implemented in NTT Communication Science Laboratories' Japanese-to-English machine translation system ALT-J/E. In a test set of 6,200 sentences, the proposed method increased the number of noun phrases with appropriate possessive pronouns generated, by 263 to 609, at the cost of generating 83 noun phrases with inappropriate possessive pronouns.<|reference_end|>
arxiv
@article{bond1996possessive, title={Possessive Pronouns as Determiners in Japanese-to-English Machine Translation}, author={Francis Bond (NTT), Kentaro Ogura (NTT), Satoru Ikehara (NTT)}, journal={arXiv preprint arXiv:cmp-lg/9601006}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601006}, primaryClass={cmp-lg cs.CL} }
bond1996possessive
arxiv-668740
cmp-lg/9601007
Context-Sensitive Measurement of Word Distance by Adaptive Scaling of a Semantic Space
<|reference_start|>Context-Sensitive Measurement of Word Distance by Adaptive Scaling of a Semantic Space: The paper proposes a computationally feasible method for measuring context-sensitive semantic distance between words. The distance is computed by adaptive scaling of a semantic space. In the semantic space, each word in the vocabulary V is represented by a multi-dimensional vector which is obtained from an English dictionary through a principal component analysis. Given a word set C which specifies a context for measuring word distance, each dimension of the semantic space is scaled up or down according to the distribution of C in the semantic space. In the space thus transformed, distance between words in V becomes dependent on the context C. An evaluation through a word prediction task shows that the proposed measurement successfully extracts the context of a text.<|reference_end|>
arxiv
@article{kozima1996context-sensitive, title={Context-Sensitive Measurement of Word Distance by Adaptive Scaling of a Semantic Space}, author={Hideki Kozima, Akira Ito (Communications Research Laboratory, Japan)}, journal={arXiv preprint arXiv:cmp-lg/9601007}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601007}, primaryClass={cmp-lg cs.CL} }
kozima1996context-sensitive
arxiv-668741
cmp-lg/9601008
Noun Phrase Reference in Japanese-to-English Machine Translation
<|reference_start|>Noun Phrase Reference in Japanese-to-English Machine Translation: This paper shows the necessity of distinguishing different referential uses of noun phrases in machine translation. We argue that differentiating between the generic, referential and ascriptive uses of noun phrases is the minimum necessary to generate articles and number correctly when translating from Japanese to English. Heuristics for determining these differences are proposed for a Japanese-to-English machine translation system. Finally the results of using the proposed heuristics are shown to have raised the percentage of noun phrases generated with correct use of articles and number in the Japanese-to-English machine translation system ALT-J/E from 65% to 77%.<|reference_end|>
arxiv
@article{bond1996noun, title={Noun Phrase Reference in Japanese-to-English Machine Translation}, author={Francis Bond (NTT), Kentaro Ogura (NTT), Tsukasa Kawaoka (Doshisha University)}, journal={arXiv preprint arXiv:cmp-lg/9601008}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601008}, primaryClass={cmp-lg cs.CL} }
bond1996noun
arxiv-668742
cmp-lg/9601009
A General Architecture for Language Engineering (GATE) - a new approach to Language Engineering R&D
<|reference_start|>A General Architecture for Language Engineering (GATE) - a new approach to Language Engineering R&D: This report argues for the provision of a common software infrastructure for NLP systems. Current trends in Language Engineering research are reviewed as motivation for this infrastructure, and relevant recent work discussed. A freely-available system called GATE is described which builds on this work.<|reference_end|>
arxiv
@article{cunningham1996a, title={A General Architecture for Language Engineering (GATE) - a new approach to Language Engineering R&D}, author={Hamish Cunningham, Robert J. Gaizauskas, Yorick Wilks (Institute for Language Speech and Hearing (ILASH) / Department of Computer Science, University of Sheffield, UK)}, journal={arXiv preprint arXiv:cmp-lg/9601009}, year={1996}, number={CS - 95 - 21}, archivePrefix={arXiv}, eprint={cmp-lg/9601009}, primaryClass={cmp-lg cs.CL} }
cunningham1996a
arxiv-668743
cmp-lg/9601010
Parsing with Typed Feature Structures
<|reference_start|>Parsing with Typed Feature Structures: In this paper we provide for parsing with respect to grammars expressed in a general TFS-based formalism, a restriction of ALE. Our motivation being the design of an abstract (WAM-like) machine for the formalism, we consider parsing as a computational process and use it as an operational semantics to guide the design of the control structures for the abstract machine. We emphasize the notion of abstract typed feature structures (AFSs) that encode the essential information of TFSs and define unification over AFSs rather than over TFSs. We then introduce an explicit construct of multi-rooted feature structures (MRSs) that naturally extend TFSs and use them to represent phrasal signs as well as grammar rules. We also employ abstractions of MRSs and give the mathematical foundations needed for manipulating them. We then present a simple bottom-up chart parser as a model for computation: grammars written in the TFS-based formalism are executed by the parser. Finally, we show that the parser is correct.<|reference_end|>
arxiv
@article{wintner1996parsing, title={Parsing with Typed Feature Structures}, author={Shuly Wintner and Nissim Francez (Computer Science, Technion, Israel Institute of Technology, Haifa, Israel)}, journal={arXiv preprint arXiv:cmp-lg/9601010}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9601010}, primaryClass={cmp-lg cs.CL} }
wintner1996parsing
arxiv-668744
cmp-lg/9601011
Parsing with Typed Feature Structures
<|reference_start|>Parsing with Typed Feature Structures: In this paper we provide for parsing with respect to grammars expressed in a general TFS-based formalism, a restriction of ALE. Our motivation being the design of an abstract (WAM-like) machine for the formalism, we consider parsing as a computational process and use it as an operational semantics to guide the design of the control structures for the abstract machine. We emphasize the notion of abstract typed feature structures (AFSs) that encode the essential information of TFSs and define unification over AFSs rather than over TFSs. We then introduce an explicit construct of multi-rooted feature structures (MRSs) that naturally extend TFSs and use them to represent phrasal signs as well as grammar rules. We also employ abstractions of MRSs and give the mathematical foundations needed for manipulating them. We formally define grammars and the languages they generate, and then describe a model for computation that corresponds to bottom-up chart parsing: grammars written in the TFS-based formalism are executed by the parser. We show that the computation is correct with respect to the independent definition. Finally, we discuss the class of grammars for which computations terminate and prove that termination can be guaranteed for off-line parsable grammars.<|reference_end|>
arxiv
@article{wintner1996parsing, title={Parsing with Typed Feature Structures}, author={Shuly Wintner and Nissim Francez (Computer Science, Technion, Israel Institute of Technology, Haifa, Israel)}, journal={arXiv preprint arXiv:cmp-lg/9601011}, year={1996}, number={Laboratory for Computational Linguistics TR #LCL 95-1}, archivePrefix={arXiv}, eprint={cmp-lg/9601011}, primaryClass={cmp-lg cs.CL} }
wintner1996parsing
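To make the central operation concrete, here is a generic unification routine over plain (untyped, non-reentrant) feature structures represented as nested dicts. It only illustrates the operation the two abstracts above build on; the papers define unification over abstract *typed* feature structures, which additionally respects a type hierarchy and appropriateness conditions.

```python
def unify(fs1, fs2):
    """Unify two feature structures given as nested dicts with string
    atoms at the leaves. Raises ValueError on an atomic clash.
    Reentrancy (structure sharing) is deliberately ignored in this sketch."""
    if isinstance(fs1, str) or isinstance(fs2, str):
        if fs1 == fs2:
            return fs1
        raise ValueError(f"clash: {fs1!r} vs {fs2!r}")
    result = dict(fs1)
    for feature, value in fs2.items():
        result[feature] = unify(result[feature], value) if feature in result else value
    return result

# unify({'agr': {'num': 'sg'}}, {'agr': {'per': '3'}})
# -> {'agr': {'num': 'sg', 'per': '3'}}
```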
arxiv-668745
cmp-lg/9602001
How Part-of-Speech Tags Affect Text Retrieval and Filtering Performance
<|reference_start|>How Part-of-Speech Tags Affect Text Retrieval and Filtering Performance: Natural language processing (NLP) applied to information retrieval (IR) and filtering problems may assign part-of-speech tags to terms and, more generally, modify queries and documents. Analytic models can predict the performance of a text filtering system as it incorporates changes suggested by NLP, allowing us to make precise statements about the average effect of NLP operations on IR. Here we provide a model of retrieval and tagging that allows us to both compute the performance change due to syntactic parsing and to allow us to understand what factors affect performance and how. In addition to a prediction of performance with tags, upper and lower bounds for retrieval performance are derived, giving the best and worst effects of including part-of-speech tags. Empirical grounds for selecting sets of tags are considered.<|reference_end|>
arxiv
@article{losee1996how, title={How Part-of-Speech Tags Affect Text Retrieval and Filtering Performance}, author={Robert M. Losee (University of North Carolina Chapel Hill, NC, U.S.A.)}, journal={arXiv preprint arXiv:cmp-lg/9602001}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9602001}, primaryClass={cmp-lg cs.CL} }
losee1996how
arxiv-668746
cmp-lg/9602002
Situations and Computation: An Overview of Recent Research
<|reference_start|>Situations and Computation: An Overview of Recent Research: Serious thinking about the computational aspects of situation theory is just starting. There have been some recent proposals in this direction (viz. PROSIT and ASTL), with varying degrees of divergence from the ontology of the theory. We believe that a programming environment incorporating bona fide situation-theoretic constructs is needed and describe our very recent BABY-SIT implementation. A detailed critical account of PROSIT and ASTL is also offered in order to compare our system with these pioneering and influential frameworks.<|reference_end|>
arxiv
@article{tin1996situations, title={Situations and Computation: An Overview of Recent Research}, author={Erkan Tin (Bilkent University) and Varol Akman (Bilkent University)}, journal={arXiv preprint arXiv:cmp-lg/9602002}, year={1996}, number={SfS-Report-04-95}, archivePrefix={arXiv}, eprint={cmp-lg/9602002}, primaryClass={cmp-lg cs.CL} }
tin1996situations
arxiv-668747
cmp-lg/9602003
Text Windows and Phrases Differing by Discipline, Location in Document, and Syntactic Structure
<|reference_start|>Text Windows and Phrases Differing by Discipline, Location in Document, and Syntactic Structure: Knowledge of window style, content, location and grammatical structure may be used to classify documents as originating within a particular discipline or may be used to place a document on a theory versus practice spectrum. This distinction is also studied here using the type-token ratio to differentiate between sublanguages. The statistical significance of windows is computed, based on the presence of terms in titles, abstracts, citations, and section headers, as well as binary independent (BI) and inverse document frequency (IDF) weightings. The characteristics of windows are studied by examining their within window density (WWD) and the S concentration (SC), the concentration of terms from various document fields (e.g. title, abstract) in the fulltext. The rate of window occurrences from the beginning to the end of document fulltext differs between academic fields. Different syntactic structures in sublanguages are examined, and their use is considered for discriminating between specific academic disciplines and, more generally, between theory versus practice or knowledge versus applications oriented documents.<|reference_end|>
arxiv
@article{losee1996text, title={Text Windows and Phrases Differing by Discipline, Location in Document, and Syntactic Structure}, author={Robert M. Losee (University of North Carolina Chapel Hill)}, journal={arXiv preprint arXiv:cmp-lg/9602003}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9602003}, primaryClass={cmp-lg cs.CL} }
losee1996text
arxiv-668748
cmp-lg/9602004
Assessing agreement on classification tasks: the kappa statistic
<|reference_start|>Assessing agreement on classification tasks: the kappa statistic: Currently, computational linguists and cognitive scientists working in the area of discourse and dialogue argue that their subjective judgments are reliable using several different statistics, none of which are easily interpretable or comparable to each other. Meanwhile, researchers in content analysis have already experienced the same difficulties and come up with a solution in the kappa statistic. We discuss what is wrong with reliability measures as they are currently used for discourse and dialogue work in computational linguistics and cognitive science, and argue that we would be better off as a field adopting techniques from content analysis.<|reference_end|>
arxiv
@article{carletta1996assessing, title={Assessing agreement on classification tasks: the kappa statistic}, author={Jean Carletta (University of Edinburgh)}, journal={Computational Lingustics 22:2 (1996 forthcoming)}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9602004}, primaryClass={cmp-lg cs.CL} }
carletta1996assessing
arxiv-668749
cmp-lg/9603001
Speech Recognition by Composition of Weighted Finite Automata
<|reference_start|>Speech Recognition by Composition of Weighted Finite Automata: We present a general framework based on weighted finite automata and weighted finite-state transducers for describing and implementing speech recognizers. The framework allows us to represent uniformly the information sources and data structures used in recognition, including context-dependent units, pronunciation dictionaries, language models and lattices. Furthermore, general but efficient algorithms can be used for combining information sources in actual recognizers and for optimizing their application. In particular, a single composition algorithm is used both to combine in advance information sources such as language models and dictionaries, and to combine acoustic observations and information sources dynamically during recognition.<|reference_end|>
arxiv
@article{pereira1996speech, title={Speech Recognition by Composition of Weighted Finite Automata}, author={Fernando C. N. Pereira and Michael D. Riley (AT&T Research)}, journal={arXiv preprint arXiv:cmp-lg/9603001}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9603001}, primaryClass={cmp-lg cs.CL} }
pereira1996speech
arxiv-668750
cmp-lg/9603002
Finite-State Approximation of Phrase-Structure Grammars
<|reference_start|>Finite-State Approximation of Phrase-Structure Grammars: Phrase-structure grammars are effective models for important syntactic and semantic aspects of natural languages, but can be computationally too demanding for use as language models in real-time speech recognition. Therefore, finite-state models are used instead, even though they lack expressive power. To reconcile those two alternatives, we designed an algorithm to compute finite-state approximations of context-free grammars and context-free-equivalent augmented phrase-structure grammars. The approximation is exact for certain context-free grammars generating regular languages, including all left-linear and right-linear context-free grammars. The algorithm has been used to build finite-state language models for limited-domain speech recognition tasks.<|reference_end|>
arxiv
@article{pereira1996finite-state, title={Finite-State Approximation of Phrase-Structure Grammars}, author={Fernando C. N. Pereira and Rebecca N. Wright (AT&T Research)}, journal={arXiv preprint arXiv:cmp-lg/9603002}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9603002}, primaryClass={cmp-lg cs.CL} }
pereira1996finite-state
arxiv-668751
cmp-lg/9603003
Attempto Controlled English (ACE)
<|reference_start|>Attempto Controlled English (ACE): Attempto Controlled English (ACE) allows domain specialists to interactively formulate requirements specifications in domain concepts. ACE can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage. The Attempto system translates specification texts in ACE into discourse representation structures and optionally into Prolog. Translated specification texts are incrementally added to a knowledge base. This knowledge base can be queried in ACE for verification, and it can be executed for simulation, prototyping and validation of the specification.<|reference_end|>
arxiv
@article{fuchs1996attempto, title={Attempto Controlled English (ACE)}, author={Norbert E. Fuchs, Rolf Schwitter (Department of Computer Science, University of Zurich)}, journal={arXiv preprint arXiv:cmp-lg/9603003}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9603003}, primaryClass={cmp-lg cs.CL} }
fuchs1996attempto
arxiv-668752
cmp-lg/9603004
Attempto - From Specifications in Controlled Natural Language towards Executable Specifications
<|reference_start|>Attempto - From Specifications in Controlled Natural Language towards Executable Specifications: Deriving formal specifications from informal requirements is difficult since one has to take into account the disparate conceptual worlds of the application domain and of software development. To bridge the conceptual gap we propose controlled natural language as a textual view on formal specifications in logic. The specification language Attempto Controlled English (ACE) is a subset of natural language that can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage. The Attempto system translates specifications in ACE into discourse representation structures and into Prolog. The resulting knowledge base can be queried in ACE for verification, and it can be executed for simulation, prototyping and validation of the specification.<|reference_end|>
arxiv
@article{schwitter1996attempto, title={Attempto - From Specifications in Controlled Natural Language towards Executable Specifications}, author={Rolf Schwitter, Norbert E. Fuchs (Department of Computer Science, University of Zurich)}, journal={arXiv preprint arXiv:cmp-lg/9603004}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9603004}, primaryClass={cmp-lg cs.CL} }
schwitter1996attempto
arxiv-668753
cmp-lg/9603005
Integrated speech and morphological processing in a connectionist continuous speech understanding for Korean
<|reference_start|>Integrated speech and morphological processing in a connectionist continuous speech understanding for Korean: A new tightly coupled speech and natural language integration model is presented for a TDNN-based continuous, possibly large-vocabulary speech recognition system for Korean. Unlike popular n-best techniques developed for integrating mainly HMM-based speech recognition and natural language processing at the {\em word level}, which is obviously inadequate for morphologically complex agglutinative languages, our model constructs a spoken language system based on a {\em morpheme-level} speech and language integration. With this integration scheme, the spoken Korean processing engine (SKOPE) is designed and implemented using a TDNN-based diphone recognition module integrated with a Viterbi-based lexical decoding and symbolic phonological/morphological co-analysis. Our experimental results show that the speaker-dependent continuous {\em eojeol} (Korean word) recognition and integrated morphological analysis can be achieved with over 80.6% success rate directly from speech inputs for the middle-level vocabularies.<|reference_end|>
arxiv
@article{lee1996integrated, title={Integrated speech and morphological processing in a connectionist continuous speech understanding for Korean}, author={Geunbae Lee, Jong-Hyeok Lee (Department of Computer Science and Engineering Pohang University of Science and Technology)}, journal={arXiv preprint arXiv:cmp-lg/9603005}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9603005}, primaryClass={cmp-lg cs.CL} }
lee1996integrated
arxiv-668754
cmp-lg/9603006
Extraction of V-N-Collocations from Text Corpora: A Feasibility Study for German
<|reference_start|>Extraction of V-N-Collocations from Text Corpora: A Feasibility Study for German: The usefulness of a statistical approach suggested by Church et al. (1991) is evaluated for the extraction of verb-noun (V-N) collocations from German text corpora. Some problematic issues of that method arising from properties of the German language are discussed and various modifications of the method are considered that might improve extraction results for German. The precision and recall of all variant methods are evaluated for V-N collocations containing support verbs, and the consequences for further work on the extraction of collocations from German corpora are discussed. With a sufficiently large corpus (>= 6 million word-tokens), the average error rate of wrong extractions can be reduced to 2.2% (97.8% precision) with the most restrictive method, however with a loss in data of almost 50% compared to a less restrictive method with still 87.6% precision. Depending on the goal to be achieved, emphasis can be put on a high recall for lexicographic purposes or on high precision for automatic lexical acquisition, in each case unfortunately leading to a decrease of the corresponding other variable. Low recall can still be acceptable if very large corpora (i.e. 50 - 100 million words) are available or if corpora for special domains are used in addition to the data found in machine readable (collocation) dictionaries.<|reference_end|>
arxiv
@article{breidt1996extraction, title={Extraction of V-N-Collocations from Text Corpora: A Feasibility Study for German}, author={Elisabeth Breidt (Seminar f"ur Sprachwissenschaft, Universit"at T"ubingen, Germany)}, journal={arXiv preprint arXiv:cmp-lg/9603006}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9603006}, primaryClass={cmp-lg cs.CL} }
breidt1996extraction
arxiv-668755
cmp-lg/9604001
Combining Hand-crafted Rules and Unsupervised Learning in Constraint-based Morphological Disambiguation
<|reference_start|>Combining Hand-crafted Rules and Unsupervised Learning in Constraint-based Morphological Disambiguation: This paper presents a constraint-based morphological disambiguation approach that is applicable to languages with complex morphology--specifically agglutinative languages with productive inflectional and derivational morphological phenomena. In certain respects, our approach has been motivated by Brill's recent work, but with the observation that his transformational approach is not directly applicable to languages like Turkish. Our system combines corpus independent hand-crafted constraint rules, constraint rules that are learned via unsupervised learning from a training corpus, and additional statistical information from the corpus to be morphologically disambiguated. The hand-crafted rules are linguistically motivated and tuned to improve precision without sacrificing recall. The unsupervised learning process produces two sets of rules: (i) choose rules, which choose morphological parses of a lexical item satisfying a constraint, effectively discarding other parses, and (ii) delete rules, which delete parses satisfying a constraint. Our approach also uses a novel approach to unknown word processing by employing a secondary morphological processor which recovers any relevant inflectional and derivational information from a lexical item whose root is unknown. With this approach, well below 1 percent of the tokens remains as unknown in the texts we have experimented with. Our results indicate that by combining these hand-crafted, statistical and learned information sources, we can attain a recall of 96 to 97 percent with a corresponding precision of 93 to 94 percent, and ambiguity of 1.02 to 1.03 parses per token.<|reference_end|>
arxiv
@article{oflazer1996combining, title={Combining Hand-crafted Rules and Unsupervised Learning in Constraint-based Morphological Disambiguation}, author={Kemal Oflazer and Gokhan Tur (Department of Computer Engineering, Bilkent University, Ankara Turkey)}, journal={arXiv preprint arXiv:cmp-lg/9604001}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604001}, primaryClass={cmp-lg cs.CL} }
oflazer1996combining
arxiv-668756
cmp-lg/9604002
A Constraint-based Case Frame Lexicon
<|reference_start|>A Constraint-based Case Frame Lexicon: We present a constraint-based case frame lexicon architecture for bi-directional mapping between a syntactic case frame and a semantic frame. The lexicon uses a semantic sense as the basic unit and employs a multi-tiered constraint structure for the resolution of syntactic information into the appropriate senses and/or idiomatic usage. Valency changing transformations such as morphologically marked passivized or causativized forms are handled via lexical rules that manipulate case frame templates. The system has been implemented in a typed-feature system and applied to Turkish.<|reference_end|>
arxiv
@article{oflazer1996a, title={A Constraint-based Case Frame Lexicon}, author={Kemal Oflazer and Okan Yilmaz (Department of Computer Engineering, Bilkent University, Ankara, Turkey)}, journal={arXiv preprint arXiv:cmp-lg/9604002}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604002}, primaryClass={cmp-lg cs.CL} }
oflazer1996a
arxiv-668757
cmp-lg/9604003
Error-tolerant Tree Matching
<|reference_start|>Error-tolerant Tree Matching: This paper presents an efficient algorithm for retrieving from a database of trees, all trees that match a given query tree approximately, that is, within a certain error tolerance. It has natural language processing applications in searching for matches in example-based translation systems, and retrieval from lexical databases containing entries of complex feature structures. The algorithm has been implemented on SparcStations, and for large randomly generated synthetic tree databases (some having tens of thousands of trees) it can associatively search for trees with a small error, in a matter of tenths of a second to a few seconds.<|reference_end|>
arxiv
@article{oflazer1996error-tolerant, title={Error-tolerant Tree Matching}, author={Kemal Oflazer (Department of Computer Engineering, Bilkent University, Ankara, Turkey)}, journal={arXiv preprint arXiv:cmp-lg/9604003}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604003}, primaryClass={cmp-lg cs.CL} }
oflazer1996error-tolerant
arxiv-668758
cmp-lg/9604004
Apportioning Development Effort in a Probabilistic LR Parsing System through Evaluation
<|reference_start|>Apportioning Development Effort in a Probabilistic LR Parsing System through Evaluation: We describe an implemented system for robust domain-independent syntactic parsing of English, using a unification-based grammar of part-of-speech and punctuation labels coupled with a probabilistic LR parser. We present evaluations of the system's performance along several different dimensions; these enable us to assess the contribution that each individual part is making to the success of the system as a whole, and thus prioritise the effort to be devoted to its further enhancement. Currently, the system is able to parse around 80% of sentences in a substantial corpus of general text containing a number of distinct genres. On a random sample of 250 such sentences the system has a mean crossing bracket rate of 0.71 and recall and precision of 83% and 84% respectively when evaluated against manually-disambiguated analyses.<|reference_end|>
arxiv
@article{carroll1996apportioning, title={Apportioning Development Effort in a Probabilistic LR Parsing System through Evaluation}, author={John Carroll (University of Sussex) and Ted Briscoe (University of Cambridge)}, journal={Conference on Empirical Methods in Natural Language Processing (EMNLP-96), 92-100}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604004}, primaryClass={cmp-lg cs.CL} }
carroll1996apportioning
arxiv-668759
cmp-lg/9604005
Better Language Models with Model Merging
<|reference_start|>Better Language Models with Model Merging: This paper investigates model merging, a technique for deriving Markov models from text or speech corpora. Models are derived by starting with a large and specific model and by successively combining states to build smaller and more general models. We present methods to reduce the time complexity of the algorithm and report on experiments on deriving language models for a speech recognition task. The experiments show the advantage of model merging over the standard bigram approach. The merged model assigns a lower perplexity to the test set and uses considerably fewer states.<|reference_end|>
arxiv
@article{brants1996better, title={Better Language Models with Model Merging}, author={Thorsten Brants (Universit"at des Saarlandes, Computational Linguistics, Saarbr"ucken, Germany)}, journal={arXiv preprint arXiv:cmp-lg/9604005}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604005}, primaryClass={cmp-lg cs.CL} }
brants1996better
arxiv-668760
cmp-lg/9604006
The Role of the Gricean Maxims in the Generation of Referring Expressions
<|reference_start|>The Role of the Gricean Maxims in the Generation of Referring Expressions: Grice's maxims of conversation [Grice 1975] are framed as directives to be followed by a speaker of the language. This paper argues that, when considered from the point of view of natural language generation, such a characterisation is rather misleading, and that the desired behaviour falls out quite naturally if we view language generation as a goal-oriented process. We argue this position with particular regard to the generation of referring expressions.<|reference_end|>
arxiv
@article{dale1996the, title={The Role of the Gricean Maxims in the Generation of Referring Expressions}, author={Robert Dale (Microsoft Institute, Sydney, Australia) and Ehud Reiter (University of Aberdeen, UK)}, journal={arXiv preprint arXiv:cmp-lg/9604006}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604006}, primaryClass={cmp-lg cs.CL} }
dale1996the
arxiv-668761
cmp-lg/9604007
Collocational Grammar
<|reference_start|>Collocational Grammar: A perspective of statistical language models which emphasizes their collocational aspect is advocated. It is suggested that strings be generalized in terms of classes of relationships instead of classes of objects. The single most important characteristic of such a model is a mechanism for comparing patterns. When patterns are fully generalized a natural definition of syntactic class emerges as a subset of relational class. These collocational syntactic classes should be an unambiguous partition of traditional syntactic classes.<|reference_end|>
arxiv
@article{freeman1996collocational, title={Collocational Grammar}, author={Robert John Freeman (Hong Kong University of Science and Technology)}, journal={arXiv preprint arXiv:cmp-lg/9604007}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604007}, primaryClass={cmp-lg cs.CL} }
freeman1996collocational
arxiv-668762
cmp-lg/9604008
Efficient Algorithms for Parsing the DOP Model
<|reference_start|>Efficient Algorithms for Parsing the DOP Model: Excellent results have been reported for Data-Oriented Parsing (DOP) of natural language texts (Bod, 1993). Unfortunately, existing algorithms are both computationally intensive and difficult to implement. Previous algorithms are expensive due to two factors: the exponential number of rules that must be generated and the use of a Monte Carlo parsing algorithm. In this paper we solve the first problem by a novel reduction of the DOP model to a small, equivalent probabilistic context-free grammar. We solve the second problem by a novel deterministic parsing strategy that maximizes the expected number of correct constituents, rather than the probability of a correct parse tree. Using the optimizations, experiments yield a 97% crossing brackets rate and 88% zero crossing brackets rate. This differs significantly from the results reported by Bod, and is comparable to results from a duplication of Pereira and Schabes's (1992) experiment on the same data. We show that Bod's results are at least partially due to an extremely fortuitous choice of test data, and partially due to using cleaner data than other researchers.<|reference_end|>
arxiv
@article{goodman1996efficient, title={Efficient Algorithms for Parsing the DOP Model}, author={Joshua Goodman (Harvard University)}, journal={Proceedings of the Conference on Empirical Methods in Natural Language Processing, May 1996}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604008}, primaryClass={cmp-lg cs.CL} }
goodman1996efficient
arxiv-668763
cmp-lg/9604009
Another Facet of LIG Parsing
<|reference_start|>Another Facet of LIG Parsing: In this paper we present a new parsing algorithm for linear indexed grammars (LIGs) in the same spirit as the one described in (Vijay-Shanker and Weir, 1993) for tree adjoining grammars. For a LIG $L$ and an input string $x$ of length $n$, we build a non ambiguous context-free grammar whose sentences are all (and exclusively) valid derivation sequences in $L$ which lead to $x$. We show that this grammar can be built in ${\cal O}(n^6)$ time and that individual parses can be extracted in linear time with the size of the extracted parse tree. Though this ${\cal O}(n^6)$ upper bound does not improve over previous results, the average case behaves much better. Moreover, practical parsing times can be decreased by some statically performed computations.<|reference_end|>
arxiv
@article{boullier1996another, title={Another Facet of LIG Parsing}, author={Pierre Boullier (INRIA-Rocquencourt, Le Chesnay Cedex, France)}, journal={arXiv preprint arXiv:cmp-lg/9604009}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604009}, primaryClass={cmp-lg cs.CL} }
boullier1996another
arxiv-668764
cmp-lg/9604010
Off-line Constraint Propagation for Efficient HPSG Processing
<|reference_start|>Off-line Constraint Propagation for Efficient HPSG Processing: We investigate the use of a technique developed in the constraint programming community called constraint propagation to automatically make a HPSG theory more specific at those places where linguistically motivated underspecification would lead to inefficient processing. We discuss two concrete HPSG examples showing how off-line constraint propagation helps improve processing efficiency.<|reference_end|>
arxiv
@article{meurers1996off-line, title={Off-line Constraint Propagation for Efficient HPSG Processing}, author={Walt Detmar Meurers, Guido Minnen (SFB 340, Univ. Tuebingen)}, journal={Proceedings HPSG/TALN Conference, Marseille, France, May 20-22}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604010}, primaryClass={cmp-lg cs.CL} }
meurers1996off-line
arxiv-668765
cmp-lg/9604011
Multi-level post-processing for Korean character recognition using morphological analysis and linguistic evaluation
<|reference_start|>Multi-level post-processing for Korean character recognition using morphological analysis and linguistic evaluation: Most of the post-processing methods for character recognition rely on contextual information at the character and word-fragment levels. However, due to linguistic characteristics of Korean, such low-level information alone is not sufficient for high-quality character-recognition applications, and we need much higher-level contextual information to improve the recognition results. This paper presents a domain independent post-processing technique that utilizes multi-level morphological, syntactic, and semantic information as well as character-level information. The proposed post-processing system performs three-level processing: candidate character-set selection, candidate eojeol (Korean word) generation through morphological analysis, and final single eojeol-sequence selection by linguistic evaluation. All the required linguistic information and probabilities are automatically acquired from a statistical corpus analysis. Experimental results demonstrate the effectiveness of our method, yielding an error correction rate of 80.46% and improving the single best-solution recognition rate from 71.2% before post-processing to 95.53%.<|reference_end|>
arxiv
@article{lee1996multi-level, title={Multi-level post-processing for Korean character recognition using morphological analysis and linguistic evaluation}, author={Geunbae Lee, Jong-Hyeok Lee, JinHee Yoo (Department of Computer Science and Engineering, Pohang University of Science and Technology, Pohang, Korea)}, journal={arXiv preprint arXiv:cmp-lg/9604011}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604011}, primaryClass={cmp-lg cs.CL} }
lee1996multi-level
arxiv-668766
cmp-lg/9604012
SemHe: A Generalised Two-Level System
<|reference_start|>SemHe: A Generalised Two-Level System: This paper presents a generalised two-level implementation which can handle linear and non-linear morphological operations. An algorithm for the interpretation of multi-tape two-level rules is described. In addition, a number of issues which arise when developing non-linear grammars are discussed with examples from Syriac.<|reference_end|>
arxiv
@article{kiraz1996semhe:, title={SemHe: A Generalised Two-Level System}, author={George Anton Kiraz (University of Cambridge)}, journal={arXiv preprint arXiv:cmp-lg/9604012}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604012}, primaryClass={cmp-lg cs.CL} }
kiraz1996semhe:
arxiv-668767
cmp-lg/9604013
Syntactic Analyses for Parallel Grammars: Auxiliaries and Genitive NPs
<|reference_start|>Syntactic Analyses for Parallel Grammars: Auxiliaries and Genitive NPs: This paper focuses on two disparate aspects of German syntax from the perspective of parallel grammar development. As part of a cooperative project, we present an innovative approach to auxiliaries and multiple genitive NPs in German. The LFG-based implementation presented here avoids unnecessary structural complexity in the representation of auxiliaries by challenging the traditional analysis of auxiliaries as raising verbs. The approach developed for multiple genitive NPs provides a more abstract, language independent representation of genitives associated with nominalized verbs. Taken together, the two approaches represent a step towards providing uniformly applicable treatments for differing languages, thus lightening the burden for machine translation.<|reference_end|>
arxiv
@article{butt1996syntactic, title={Syntactic Analyses for Parallel Grammars: Auxiliaries and Genitive NPs}, author={Miriam Butt, Christian Fortmann, and Christian Rohrer (University of Stuttgart)}, journal={arXiv preprint arXiv:cmp-lg/9604013}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604013}, primaryClass={cmp-lg cs.CL} }
butt1996syntactic
arxiv-668768
cmp-lg/9604014
The importance of being lazy -- using lazy evaluation to process queries to HPSG grammars
<|reference_start|>The importance of being lazy -- using lazy evaluation to process queries to HPSG grammars: Linguistic theories formulated in the architecture of {\sc hpsg} can be very precise and explicit since {\sc hpsg} provides a formally well-defined setup. However, when querying a faithful implementation of such an explicit theory, the large data structures specified can make it hard to see the relevant aspects of the reply given by the system. Furthermore, the system spends much time applying constraints which can never fail just to be able to enumerate specific answers. In this paper we want to describe lazy evaluation as the result of an off-line compilation technique. This method of evaluation can be used to answer queries to an {\sc hpsg} system so that only the relevant aspects are checked and output.<|reference_end|>
arxiv
@article{götz1996the, title={The importance of being lazy -- using lazy evaluation to process queries to HPSG grammars}, author={Thilo G"otz and Walt Detmar Meurers (Univ. of T"ubingen, Germany)}, journal={arXiv preprint arXiv:cmp-lg/9604014}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604014}, primaryClass={cmp-lg cs.CL} }
götz1996the
arxiv-668769
cmp-lg/9604015
Computing Prosodic Morphology
<|reference_start|>Computing Prosodic Morphology: This paper establishes a framework under which various aspects of prosodic morphology, such as templatic morphology and infixation, can be handled under two-level theory using an implemented multi-tape two-level model. The paper provides a new computational analysis of root-and-pattern morphology based on prosody.<|reference_end|>
arxiv
@article{kiraz1996computing, title={Computing Prosodic Morphology}, author={George Anton Kiraz (University of Cambridge)}, journal={arXiv preprint arXiv:cmp-lg/9604015}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604015}, primaryClass={cmp-lg cs.CL} }
kiraz1996computing
arxiv-668770
cmp-lg/9604016
Processing Metonymy: a Domain-Model Heuristic Graph Traversal Approach
<|reference_start|>Processing Metonymy: a Domain-Model Heuristic Graph Traversal Approach: We address here the treatment of metonymic expressions from a knowledge representation perspective, that is, in the context of a text understanding system which aims to build a conceptual representation from texts according to a domain model expressed in a knowledge representation formalism. We focus in this paper on the part of the semantic analyser which deals with semantic composition. We explain how we use the domain model to handle metonymy dynamically, and more generally, to underlie semantic composition, using the knowledge descriptions attached to each concept of our ontology as a kind of concept-level, multiple-role qualia structure. We rely for this on a heuristic path search algorithm that exploits the graphic aspects of the conceptual graphs formalism. The methods described have been implemented and applied on French texts in the medical domain.<|reference_end|>
arxiv
@article{bouaud1996processing, title={Processing Metonymy: a Domain-Model Heuristic Graph Traversal Approach}, author={Jacques Bouaud (1), Bruno Bachimont (1), Pierre Zweigenbaum (1) ((1) DIAM: SIM/AP-HP & Dept de Biomath'ematiques, Universit'e Paris 6)}, journal={arXiv preprint arXiv:cmp-lg/9604016}, year={1996}, number={DIAM RI-95-159}, archivePrefix={arXiv}, eprint={cmp-lg/9604016}, primaryClass={cmp-lg cs.CL} }
bouaud1996processing
arxiv-668771
cmp-lg/9604017
Fast Parsing using Pruning and Grammar Specialization
<|reference_start|>Fast Parsing using Pruning and Grammar Specialization: We show how a general grammar may be automatically adapted for fast parsing of utterances from a specific domain by means of constituent pruning and grammar specialization based on explanation-based learning. These methods together give an order of magnitude increase in speed, and the coverage loss entailed by grammar specialization is reduced to approximately half that reported in previous work. Experiments described here suggest that the loss of coverage has been reduced to the point where it no longer causes significant performance degradation in the context of a real application.<|reference_end|>
arxiv
@article{rayner1996fast, title={Fast Parsing using Pruning and Grammar Specialization}, author={Manny Rayner and David Carter (SRI International, Cambridge)}, journal={arXiv preprint arXiv:cmp-lg/9604017}, year={1996}, number={CRC-060}, archivePrefix={arXiv}, eprint={cmp-lg/9604017}, primaryClass={cmp-lg cs.CL} }
rayner1996fast
arxiv-668772
cmp-lg/9604018
The Measure of a Model
<|reference_start|>The Measure of a Model: This paper describes measures for evaluating the three determinants of how well a probabilistic classifier performs on a given test set. These determinants are the appropriateness, for the test set, of the results of (1) feature selection, (2) formulation of the parametric form of the model, and (3) parameter estimation. These are part of any model formulation procedure, even if not broken out as separate steps, so the tradeoffs explored in this paper are relevant to a wide variety of methods. The measures are demonstrated in a large experiment, in which they are used to analyze the results of roughly 300 classifiers that perform word-sense disambiguation.<|reference_end|>
arxiv
@article{bruce1996the, title={The Measure of a Model}, author={Rebecca Bruce (Southern Methodist University), Janyce Wiebe (New Mexico State University), and Ted Pedersen (Southern Methodist University)}, journal={In Proceedings of the Empirical Methods in Natural Language Processing Conference, May 1996, Philadelphia, PA}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604018}, primaryClass={cmp-lg cs.CL} }
bruce1996the
arxiv-668773
cmp-lg/9604019
Magic for Filter Optimization in Dynamic Bottom-up Processing
<|reference_start|>Magic for Filter Optimization in Dynamic Bottom-up Processing: Off-line compilation of logic grammars using Magic allows an incorporation of filtering into the logic underlying the grammar. The explicit definite clause characterization of filtering resulting from Magic compilation allows processor independent and logically clean optimizations of dynamic bottom-up processing with respect to goal-directedness. Two filter optimizations based on the program transformation technique of Unfolding are discussed which are of practical and theoretical interest.<|reference_end|>
arxiv
@article{minnen1996magic, title={Magic for Filter Optimization in Dynamic Bottom-up Processing}, author={Guido Minnen (SFB 340, Univ. of Tuebingen)}, journal={Proceedings of ACL 96, Santa Cruz, USA, June 23-28}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604019}, primaryClass={cmp-lg cs.CL} }
minnen1996magic
arxiv-668774
cmp-lg/9604020
Translating into Free Word Order Languages
<|reference_start|>Translating into Free Word Order Languages: In this paper, I discuss machine translation of English text into Turkish, a relatively ``free'' word order language. I present algorithms that determine the topic and the focus of each target sentence (using salience (Centering Theory), old vs. new information, and contrastiveness in the discourse model) in order to generate the contextually appropriate word orders in the target language.<|reference_end|>
arxiv
@article{hoffman1996translating, title={Translating into Free Word Order Languages}, author={Beryl Hoffman (University of Edinburgh)}, journal={arXiv preprint arXiv:cmp-lg/9604020}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604020}, primaryClass={cmp-lg cs.CL} }
hoffman1996translating
arxiv-668775
cmp-lg/9604021
Extended Dependency Structures and their Formal Interpretation
<|reference_start|>Extended Dependency Structures and their Formal Interpretation: We describe two ``semantically-oriented'' dependency-structure formalisms, U-forms and S-forms. U-forms have been previously used in machine translation as interlingual representations, but without being provided with a formal interpretation. S-forms, which we introduce in this paper, are a scoped version of U-forms, and we define a compositional semantics mechanism for them. Two types of semantic composition are basic: complement incorporation and modifier incorporation. Binding of variables is done at the time of incorporation, permitting much flexibility in composition order and a simple account of the semantic effects of permuting several incorporations.<|reference_end|>
arxiv
@article{dymetman1996extended, title={Extended Dependency Structures and their Formal Interpretation}, author={Marc Dymetman and Max Copperman (Rank Xerox Research Centre, Grenoble)}, journal={arXiv preprint arXiv:cmp-lg/9604021}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604021}, primaryClass={cmp-lg cs.CL} }
dymetman1996extended
arxiv-668776
cmp-lg/9604022
Unsupervised Learning of Word-Category Guessing Rules
<|reference_start|>Unsupervised Learning of Word-Category Guessing Rules: Words unknown to the lexicon present a substantial problem to part-of-speech tagging. In this paper we present a technique for fully unsupervised statistical acquisition of rules which guess possible parts-of-speech for unknown words. Three complementary sets of word-guessing rules are induced from the lexicon and a raw corpus: prefix morphological rules, suffix morphological rules and ending-guessing rules. The learning was performed on the Brown Corpus data and rule-sets, with a highly competitive performance, were produced and compared with the state-of-the-art.<|reference_end|>
arxiv
@article{mikheev1996unsupervised, title={Unsupervised Learning of Word-Category Guessing Rules}, author={Andrei Mikheev (HCRC, Edinburgh University)}, journal={arXiv preprint arXiv:cmp-lg/9604022}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604022}, primaryClass={cmp-lg cs.CL} }
mikheev1996unsupervised
arxiv-668777
cmp-lg/9604023
A Model-Theoretic Framework for Theories of Syntax
<|reference_start|>A Model-Theoretic Framework for Theories of Syntax: A natural next step in the evolution of constraint-based grammar formalisms from rewriting formalisms is to abstract fully away from the details of the grammar mechanism---to express syntactic theories purely in terms of the properties of the class of structures they license. By focusing on the structural properties of languages rather than on mechanisms for generating or checking structures that exhibit those properties, this model-theoretic approach can offer simpler and significantly clearer expression of theories and can potentially provide a uniform formalization, allowing disparate theories to be compared on the basis of those properties. We discuss $\LKP$, a monadic second-order logical framework for such an approach to syntax that has the distinctive virtue of being superficially expressive---supporting direct statement of most linguistically significant syntactic properties---but having well-defined strong generative capacity---languages are definable in $\LKP$ iff they are strongly context-free. We draw examples from the realms of GPSG and GB.<|reference_end|>
arxiv
@article{rogers1996a, title={A Model-Theoretic Framework for Theories of Syntax}, author={James Rogers (Institute for Research in Cognitive Science, University of Pennsylvania)}, journal={arXiv preprint arXiv:cmp-lg/9604023}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604023}, primaryClass={cmp-lg cs.CL} }
rogers1996a
arxiv-668778
cmp-lg/9604024
Connectivity in Bag Generation
<|reference_start|>Connectivity in Bag Generation: This paper presents a pruning technique which can be used to reduce the number of paths searched in rule-based bag generators of the type proposed by \cite{poznanskietal95} and \cite{popowich95}. Pruning the search space in these generators is important given the computational cost of bag generation. The technique relies on a connectivity constraint between the semantic indices associated with each lexical sign in a bag. Testing the algorithm on a range of sentences shows reductions in the generation time and the number of edges constructed.<|reference_end|>
arxiv
@article{trujillo1996connectivity, title={Connectivity in Bag Generation}, author={Arturo Trujillo, Simon Berry (The Robert Gordon University, Aberdeen)}, journal={arXiv preprint arXiv:cmp-lg/9604024}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604024}, primaryClass={cmp-lg cs.CL} }
trujillo1996connectivity
arxiv-668779
cmp-lg/9604025
Learning Part-of-Speech Guessing Rules from Lexicon: Extension to Non-Concatenative Operations
<|reference_start|>Learning Part-of-Speech Guessing Rules from Lexicon: Extension to Non-Concatenative Operations: One of the problems in part-of-speech tagging of real-world texts is that of words unknown to the lexicon. In Mikheev (ACL-96 cmp-lg/9604022), a technique for fully unsupervised statistical acquisition of rules which guess possible parts-of-speech for unknown words was proposed. One of the over-simplifications assumed by this learning technique was the acquisition of morphological rules which obey only simple concatenative regularities of the main word with an affix. In this paper we extend this technique to the non-concatenative cases of suffixation and assess the gain in performance.<|reference_end|>
arxiv
@article{mikheev1996learning, title={Learning Part-of-Speech Guessing Rules from Lexicon: Extension to Non-Concatenative Operations}, author={Andrei Mikheev (HCRC, Edinburgh University)}, journal={arXiv preprint arXiv:cmp-lg/9604025}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604025}, primaryClass={cmp-lg cs.CL} }
mikheev1996learning
arxiv-668780
cmp-lg/9604026
Towards a Workbench for Acquisition of Domain Knowledge from Natural Language
<|reference_start|>Towards a Workbench for Acquisition of Domain Knowledge from Natural Language: In this paper we describe the architecture and functionality of the main components of a workbench for the acquisition of domain knowledge from large text corpora. The workbench supports an incremental process of corpus analysis starting from a rough automatic extraction and organization of lexico-semantic regularities and ending with a computer supported analysis of extracted data and a semi-automatic refinement of obtained hypotheses. For doing this the workbench employs methods from computational linguistics, information retrieval and knowledge engineering. Although the workbench is currently under implementation, some of its components are already implemented and their performance is illustrated with samples from engineering for a medical domain.<|reference_end|>
arxiv
@article{mikheev1996towards, title={Towards a Workbench for Acquisition of Domain Knowledge from Natural Language}, author={Andrei Mikheev (HCRC, Edinburgh University) and Steven Finch (Thomson Technical Labs, Rockville, Maryland)}, journal={arXiv preprint arXiv:cmp-lg/9604026}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9604026}, primaryClass={cmp-lg cs.CL} }
mikheev1996towards
arxiv-668781
cmp-lg/9605001
Compiling a Partition-Based Two-Level Formalism
<|reference_start|>Compiling a Partition-Based Two-Level Formalism: This paper describes an algorithm for the compilation of a two (or more) level orthographic or phonological rule notation into finite state transducers. The notation is an alternative to the standard one deriving from Koskenniemi's work: it is believed to have some practical descriptive advantages, and is quite widely used, but has a different interpretation. Efficient interpreters exist for the notation, but until now it has not been clear how to compile to equivalent automata in a transparent way. The present paper shows how to do this, using some of the conceptual tools provided by Kaplan and Kay's regular relations calculus.<|reference_end|>
arxiv
@article{grimley-evans1996compiling, title={Compiling a Partition-Based Two-Level Formalism}, author={Edmund Grimley-Evans, George Anton Kiraz, and Stephen G. Pulman (University of Cambridge)}, journal={arXiv preprint arXiv:cmp-lg/9605001}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605001}, primaryClass={cmp-lg cs.CL} }
grimley-evans1996compiling
arxiv-668782
cmp-lg/9605002
Building Natural-Language Generation Systems
<|reference_start|>Building Natural-Language Generation Systems: This is a very short paper that briefly discusses some of the tasks that NLG systems perform. It is of no research interest, but I have occasionally found it useful as a way of introducing NLG to potential project collaborators who know nothing about the field.<|reference_end|>
arxiv
@article{reiter1996building, title={Building Natural-Language Generation Systems}, author={Ehud Reiter (University of Aberdeen, UK)}, journal={arXiv preprint arXiv:cmp-lg/9605002}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605002}, primaryClass={cmp-lg cs.CL} }
reiter1996building
arxiv-668783
cmp-lg/9605003
Yet Another Paper about Partial Verb Phrase Fronting in German
<|reference_start|>Yet Another Paper about Partial Verb Phrase Fronting in German: I describe a very simple HPSG analysis for partial verb phrase fronting. I will argue that the presented account is more adequate than others made during the past years because it allows the description of constituents in fronted positions with their modifier remaining in the non-fronted part of the sentence. A problem with ill-formed signs that are admitted by all HPSG accounts for partial verb phrase fronting known so far will be explained and a solution will be suggested that uses the difference between combinatoric relations of signs and their representation in word order domains.<|reference_end|>
arxiv
@article{müller1996yet, title={Yet Another Paper about Partial Verb Phrase Fronting in German}, author={Stefan M"uller (Lehrstuhl Computerlinguistik, Humboldt-University Berlin)}, journal={arXiv preprint arXiv:cmp-lg/9605003}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605003}, primaryClass={cmp-lg cs.CL} }
müller1996yet
arxiv-668784
cmp-lg/9605004
Higher-Order Coloured Unification and Natural Language Semantics
<|reference_start|>Higher-Order Coloured Unification and Natural Language Semantics: In this paper, we show that Higher-Order Coloured Unification - a form of unification developed for automated theorem proving - provides a general theory for modeling the interface between the interpretation process and other sources of linguistic, non semantic information. In particular, it provides the general theory for the Primary Occurrence Restriction which (Dalrymple, Shieber and Pereira, 1991)'s analysis called for.<|reference_end|>
arxiv
@article{gardent1996higher-order, title={Higher-Order Coloured Unification and Natural Language Semantics}, author={Claire Gardent and Michael Kohlhase (Universitaet des Saarlandes, Saarbruecken, Germany)}, journal={arXiv preprint arXiv:cmp-lg/9605004}, year={1996}, number={CLAUS-76}, archivePrefix={arXiv}, eprint={cmp-lg/9605004}, primaryClass={cmp-lg cs.CL} }
gardent1996higher-order
arxiv-668785
cmp-lg/9605005
Focus and Higher-Order Unification
<|reference_start|>Focus and Higher-Order Unification: Pulman has shown that Higher--Order Unification (HOU) can be used to model the interpretation of focus. In this paper, we extend the unification--based approach to cases which are often seen as a test--bed for focus theory: utterances with multiple focus operators and second occurrence expressions. We then show that the resulting analysis favourably compares with two prominent theories of focus (namely, Rooth's Alternative Semantics and Krifka's Structured Meanings theory) in that it correctly generates interpretations which these alternative theories cannot yield. Finally, we discuss the formal properties of the approach and argue that even though HOU need not terminate, for the class of unification--problems dealt with in this paper, HOU avoids this shortcoming and is in fact computationally tractable.<|reference_end|>
arxiv
@article{gardent1996focus, title={Focus and Higher-Order Unification}, author={Claire Gardent and Michael Kohlhase (Universitaet des Saarlandes, Saarbruecken, Germany)}, journal={arXiv preprint arXiv:cmp-lg/9605005}, year={1996}, number={CLAUS-75}, archivePrefix={arXiv}, eprint={cmp-lg/9605005}, primaryClass={cmp-lg cs.CL} }
gardent1996focus
arxiv-668786
cmp-lg/9605006
Active Constraints for a Direct Interpretation of HPSG
<|reference_start|>Active Constraints for a Direct Interpretation of HPSG: In this paper, we characterize the properties of a direct interpretation of HPSG and present the advantages of this approach. High-level programming languages constitute in this perspective an efficient solution: we show how a multi-paradigm approach, containing in particular constraint logic programming, offers mechanisms close to those of the theory and preserves its fundamental properties. We take the example of LIFE and describe the implementation of the main HPSG mechanisms.<|reference_end|>
arxiv
@article{blache1996active, title={Active Constraints for a Direct Interpretation of HPSG}, author={Philippe Blache and Jean-Louis Paquelin (2LC-CNRS, France)}, journal={arXiv preprint arXiv:cmp-lg/9605006}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605006}, primaryClass={cmp-lg cs.CL} }
blache1996active
arxiv-668787
cmp-lg/9605007
Resolving Anaphors in Embedded Sentences
<|reference_start|>Resolving Anaphors in Embedded Sentences: We propose an algorithm to resolve anaphors, tackling mainly the problem of intrasentential antecedents. We base our methodology on the fact that such antecedents are likely to occur in embedded sentences. Sidner's focusing mechanism is used as the basic algorithm in a more complete approach. The proposed algorithm has been tested and implemented as a part of a conceptual analyser, mainly to process pronouns. Details of an evaluation are given.<|reference_end|>
arxiv
@article{azzam1996resolving, title={Resolving Anaphors in Embedded Sentences}, author={Saliha Azzam (Sheffield University)}, journal={arXiv preprint arXiv:cmp-lg/9605007}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605007}, primaryClass={cmp-lg cs.CL} }
azzam1996resolving
arxiv-668788
cmp-lg/9605008
Tactical Generation in a Free Constituent Order Language
<|reference_start|>Tactical Generation in a Free Constituent Order Language: This paper describes tactical generation in Turkish, a free constituent order language, in which the order of the constituents may change according to the information structure of the sentences to be generated. In the absence of any information regarding the information structure of a sentence (i.e., topic, focus, background, etc.), the constituents of the sentence obey a default order, but the order is almost freely changeable, depending on the constraints of the text flow or discourse. We have used a recursively structured finite state machine for handling the changes in constituent order, implemented as a right-linear grammar backbone. Our implementation environment is the GenKit system, developed at Carnegie Mellon University--Center for Machine Translation. Morphological realization has been implemented using an external morphological analysis/generation component which performs concrete morpheme selection and handles morphographemic processes.<|reference_end|>
arxiv
@article{hakkani1996tactical, title={Tactical Generation in a Free Constituent Order Language}, author={Dilek Zeynep Hakkani, Kemal Oflazer, Ilyas Cicekli}, journal={Proceedings of 1996 International Workshop on Natural Language Generation}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605008}, primaryClass={cmp-lg cs.CL} }
hakkani1996tactical
arxiv-668789
cmp-lg/9605009
Learning similarity-based word sense disambiguation from sparse data
<|reference_start|>Learning similarity-based word sense disambiguation from sparse data: We describe a method for automatic word sense disambiguation using a text corpus and a machine-readable dictionary (MRD). The method is based on word similarity and context similarity measures. Words are considered similar if they appear in similar contexts; contexts are similar if they contain similar words. The circularity of this definition is resolved by an iterative, converging process, in which the system learns from the corpus a set of typical usages for each of the senses of the polysemous word listed in the MRD. A new instance of a polysemous word is assigned the sense associated with the typical usage most similar to its context. Experiments show that this method performs well, and can learn even from very sparse training data.<|reference_end|>
arxiv
@article{karov1996learning, title={Learning similarity-based word sense disambiguation from sparse data}, author={Yael Karov and Shimon Edelman (The Weizmann Institute of Science)}, journal={arXiv preprint arXiv:cmp-lg/9605009}, year={1996}, number={Weizmann CS-TR 96-05, March 1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605009}, primaryClass={cmp-lg cs.CL} }
karov1996learning
arxiv-668790
cmp-lg/9605010
Best-First Surface Realization
<|reference_start|>Best-First Surface Realization: Current work in surface realization concentrates on the use of general, abstract algorithms that interpret large, reversible grammars. Only little attention has been paid so far to the many small and simple applications that require coverage of a small sublanguage at different degrees of sophistication. The system TG/2 described in this paper can be smoothly integrated with deep generation processes, it integrates canned text, templates, and context-free rules into a single formalism, it allows for both textual and tabular output, and it can be parameterized according to linguistic preferences. These features are based on suitably restricted production system techniques and on a generic backtracking regime.<|reference_end|>
arxiv
@article{busemann1996best-first, title={Best-First Surface Realization}, author={Stephan Busemann (DFKI GmbH)}, journal={arXiv preprint arXiv:cmp-lg/9605010}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605010}, primaryClass={cmp-lg cs.CL} }
busemann1996best-first
arxiv-668791
cmp-lg/9605011
Counting Coordination Categorially
<|reference_start|>Counting Coordination Categorially: This paper presents a way of reducing the complexity of parsing free coordination. It relies on the Coordinative Count Invariant, a property of derivable sequences in occurrence-sensitive categorial grammar. This invariant can be exploited to cut down deterministically the search space for coordinated sentences to minimal fractions. The invariant is based on inequalities, which is shown to be the best one can get in the presence of coordination without proper parsing. It is implemented in a categorial parser for Dutch. Some results of applying the invariant to the parsing of coordination in this parser are presented.<|reference_end|>
arxiv
@article{cremers1996counting, title={Counting Coordination Categorially}, author={Crit Cremers and Maarten Hijzelendoorn}, journal={arXiv preprint arXiv:cmp-lg/9605011}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605011}, primaryClass={cmp-lg cs.CL} }
cremers1996counting
arxiv-668792
cmp-lg/9605012
A New Statistical Parser Based on Bigram Lexical Dependencies
<|reference_start|>A New Statistical Parser Based on Bigram Lexical Dependencies: This paper describes a new statistical parser which is based on probabilities of dependencies between head-words in the parse tree. Standard bigram probability estimation techniques are extended to calculate probabilities of dependencies between pairs of words. Tests using Wall Street Journal data show that the method performs at least as well as SPATTER (Magerman 95, Jelinek et al 94), which has the best published results for a statistical parser on this task. The simplicity of the approach means the model trains on 40,000 sentences in under 15 minutes. With a beam search strategy parsing speed can be improved to over 200 sentences a minute with negligible loss in accuracy.<|reference_end|>
arxiv
@article{collins1996a, title={A New Statistical Parser Based on Bigram Lexical Dependencies}, author={Michael Collins (University of Pennsylvania)}, journal={arXiv preprint arXiv:cmp-lg/9605012}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605012}, primaryClass={cmp-lg cs.CL} }
collins1996a
arxiv-668793
cmp-lg/9605013
Learning Dependencies between Case Frame Slots
<|reference_start|>Learning Dependencies between Case Frame Slots: We address the problem of automatically acquiring case frame patterns (selectional patterns) from large corpus data. In particular, we propose a method of learning dependencies between case frame slots. We view the problem of learning case frame patterns as that of learning multi-dimensional discrete joint distributions, where random variables represent case slots. We then formalize the dependencies between case slots as the probabilistic dependencies between these random variables. Since the number of parameters in a multi-dimensional joint distribution is exponential, it is infeasible to accurately estimate them in practice. To overcome this difficulty, we settle for approximating the target joint distribution by the product of low order component distributions, based on corpus data. In particular we propose to employ an efficient learning algorithm based on the MDL principle to realize this task. Our experimental results indicate that for certain classes of verbs, the accuracy achieved in a disambiguation experiment is improved by using the acquired knowledge of dependencies.<|reference_end|>
arxiv
@article{li1996learning, title={Learning Dependencies between Case Frame Slots}, author={Hang Li and Naoki Abe (Theory NEC Lab., RWCP)}, journal={arXiv preprint arXiv:cmp-lg/9605013}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605013}, primaryClass={cmp-lg cs.CL} }
li1996learning
arxiv-668794
cmp-lg/9605014
Clustering Words with the MDL Principle
<|reference_start|>Clustering Words with the MDL Principle: We address the problem of automatically constructing a thesaurus (hierarchically clustering words) based on corpus data. We view the problem of clustering words as that of estimating a joint distribution over the Cartesian product of a partition of a set of nouns and a partition of a set of verbs, and propose an estimation algorithm using simulated annealing with an energy function based on the Minimum Description Length (MDL) Principle. We empirically compared the performance of our method based on the MDL Principle against that of one based on the Maximum Likelihood Estimator, and found that the former outperforms the latter. We also evaluated the method by conducting pp-attachment disambiguation experiments using an automatically constructed thesaurus. Our experimental results indicate that we can improve accuracy in disambiguation by using such a thesaurus.<|reference_end|>
arxiv
@article{li1996clustering, title={Clustering Words with the MDL Principle}, author={Hang Li and Naoki Abe (Theory NEC Lab., RWCP)}, journal={arXiv preprint arXiv:cmp-lg/9605014}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605014}, primaryClass={cmp-lg cs.CL} }
li1996clustering
arxiv-668795
cmp-lg/9605015
Adapting the Core Language Engine to French and Spanish
<|reference_start|>Adapting the Core Language Engine to French and Spanish: We describe how substantial domain-independent language-processing systems for French and Spanish were quickly developed by manually adapting an existing English-language system, the SRI Core Language Engine. We explain the adaptation process in detail, and argue that it provides a fairly general recipe for converting a grammar-based system for English into a corresponding one for a Romance language.<|reference_end|>
arxiv
@article{rayner1996adapting, title={Adapting the Core Language Engine to French and Spanish}, author={Manny Rayner and David Carter (SRI International, Cambridge) and Pierrette Bouillon (ISSCO, Geneva)}, journal={arXiv preprint arXiv:cmp-lg/9605015}, year={1996}, number={CRC-061}, archivePrefix={arXiv}, eprint={cmp-lg/9605015}, primaryClass={cmp-lg cs.CL} }
rayner1996adapting
arxiv-668796
cmp-lg/9605016
Parsing for Semidirectional Lambek Grammar is NP-Complete
<|reference_start|>Parsing for Semidirectional Lambek Grammar is NP-Complete: We study the computational complexity of the parsing problem of a variant of Lambek Categorial Grammar that we call semidirectional. In the semidirectional Lambek calculus SDL there is an additional non-directional abstraction rule allowing the formula abstracted over to appear anywhere in the premise sequent's left-hand side, thus permitting non-peripheral extraction. SDL grammars can generate every context-free language, and more besides. We show that the parsing problem for semidirectional Lambek Grammar is NP-complete by a reduction of the 3-Partition problem.<|reference_end|>
arxiv
@article{doerre1996parsing, title={Parsing for Semidirectional Lambek Grammar is NP-Complete}, author={Jochen Doerre (Univ. Stuttgart)}, journal={Proceedings ACL '96 (Santa Cruz)}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605016}, primaryClass={cmp-lg cs.CL} }
doerre1996parsing
arxiv-668797
cmp-lg/9605017
A Chart Generator for Shake and Bake Machine Translation
<|reference_start|>A Chart Generator for Shake and Bake Machine Translation: A generation algorithm based on an active chart parsing algorithm is introduced which can be used in conjunction with a Shake and Bake machine translation system. A concise Prolog implementation of the algorithm is provided, and some performance comparisons with a shift-reduce based algorithm are given which show the chart generator is much more efficient for generating all possible sentences from an input specification.<|reference_end|>
arxiv
@article{popowich1996a, title={A Chart Generator for Shake and Bake Machine Translation}, author={Fred Popowich (Simon Fraser University, Canada)}, journal={arXiv preprint arXiv:cmp-lg/9605017}, year={1996}, number={CMPT TR 95-08}, archivePrefix={arXiv}, eprint={cmp-lg/9605017}, primaryClass={cmp-lg cs.CL} }
popowich1996a
arxiv-668798
cmp-lg/9605018
Efficient Tabular LR Parsing
<|reference_start|>Efficient Tabular LR Parsing: We give a new treatment of tabular LR parsing, which is an alternative to Tomita's generalized LR algorithm. The advantage is twofold. Firstly, our treatment is conceptually more attractive because it uses simpler concepts, such as grammar transformations and standard tabulation techniques also known as chart parsing. Secondly, the static and dynamic complexity of parsing, both in space and time, is significantly reduced.<|reference_end|>
arxiv
@article{nederhof1996efficient, title={Efficient Tabular LR Parsing}, author={Mark-Jan Nederhof (University of Groningen) and Giorgio Satta (University of Padua)}, journal={Proceedings ACL '96 (Santa Cruz)}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605018}, primaryClass={cmp-lg cs.CL} }
nederhof1996efficient
arxiv-668799
cmp-lg/9605019
Noun-Phrase Analysis in Unrestricted Text for Information Retrieval
<|reference_start|>Noun-Phrase Analysis in Unrestricted Text for Information Retrieval: Information retrieval is an important application area of natural-language processing where one encounters the genuine challenge of processing large quantities of unrestricted natural-language text. This paper reports on the application of a few simple, yet robust and efficient noun-phrase analysis techniques to create better indexing phrases for information retrieval. In particular, we describe a hybrid approach to the extraction of meaningful (continuous or discontinuous) subcompounds from complex noun phrases using both corpus statistics and linguistic heuristics. Results of experiments show that indexing based on such extracted subcompounds improves both recall and precision in an information retrieval system. The noun-phrase analysis techniques are also potentially useful for book indexing and automatic thesaurus extraction.<|reference_end|>
arxiv
@article{evans1996noun-phrase, title={Noun-Phrase Analysis in Unrestricted Text for Information Retrieval}, author={David A. Evans and Chengxiang Zhai (Carnegie Mellon University)}, journal={Proceedings of the 34th Annual Meeting of Association for Computational Linguistics, Santa Cruz, California, June 24-28, 1996. 17-24.}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605019}, primaryClass={cmp-lg cs.CL} }
evans1996noun-phrase
arxiv-668800
cmp-lg/9605020
Where Defaults Don't Help: the Case of the German Plural System
<|reference_start|>Where Defaults Don't Help: the Case of the German Plural System: The German plural system has become a focal point for conflicting theories of language, both linguistic and cognitive. We present simulation results with three simple classifiers - an ordinary nearest neighbour algorithm, Nosofsky's `Generalized Context Model' (GCM) and a standard, three-layer backprop network - predicting the plural class from a phonological representation of the singular in German. Though these are absolutely `minimal' models, in terms of architecture and input information, they nevertheless do remarkably well. The nearest neighbour predicts the correct plural class with an accuracy of 72% for a set of 24,640 nouns from the CELEX database. With a subset of 8,598 (non-compound) nouns, the nearest neighbour, the GCM and the network score 71.0%, 75.0% and 83.5%, respectively, on novel items. Furthermore, they outperform a hybrid, `pattern-associator + default rule', model, as proposed by Marcus et al. (1995), on this data set.<|reference_end|>
arxiv
@article{nakisa1996where, title={Where Defaults Don't Help: the Case of the German Plural System}, author={Ramin Charles Nakisa and Ulrike Hahn (Oxford University)}, journal={arXiv preprint arXiv:cmp-lg/9605020}, year={1996}, archivePrefix={arXiv}, eprint={cmp-lg/9605020}, primaryClass={cmp-lg cs.CL} }
nakisa1996where