corpus_id (string, 7-12 chars) | paper_id (string, 9-16 chars) | title (string, 1-261 chars) | abstract (string, 70-4.02k chars) | source (1 class: arxiv) | bibtex (string, 208-20.9k chars) | citation_key (string, 6-100 chars)
---|---|---|---|---|---|---
arxiv-668301 | cmp-lg/9405005 | Pearl: A Probabilistic Chart Parser | <|reference_start|>Pearl: A Probabilistic Chart Parser: This paper describes a natural language parsing algorithm for unrestricted text which uses a probability-based scoring function to select the "best" parse of a sentence. The parser, Pearl, is a time-asynchronous bottom-up chart parser with Earley-type top-down prediction which pursues the highest-scoring theory in the chart, where the score of a theory represents the extent to which the context of the sentence predicts that interpretation. This parser differs from previous attempts at stochastic parsers in that it uses a richer form of conditional probabilities based on context to predict likelihood. Pearl also provides a framework for incorporating the results of previous work in part-of-speech assignment, unknown word models, and other probabilistic models of linguistic features into one parsing tool, interleaving these techniques instead of using the traditional pipeline architecture. In preliminary tests, Pearl has been successful at resolving part-of-speech and word (in speech processing) ambiguity, determining categories for unknown words, and selecting correct parses first using a very loosely fitting covering grammar.<|reference_end|> | arxiv | @article{magerman1994pearl,
title={Pearl: A Probabilistic Chart Parser},
author={David M. Magerman and Mitchell P. Marcus},
journal={Proceedings, 2nd International Workshop on Parsing Technologies},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405005},
primaryClass={cmp-lg cs.CL}
} | magerman1994pearl |
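Pearl's control strategy can be made concrete in a few lines: edges live on a score-ordered agenda, and the highest-scoring theory is always extended first. The sketch below is a minimal illustration rather than the authors' implementation; it handles binary rules only and uses plain log-probabilities where Pearl uses context-conditioned scores, and the `lexicon` and `rules` shapes are assumptions.

```python
import heapq
from collections import defaultdict

def best_first_parse(words, lexicon, rules, goal="S"):
    """Best-first chart parsing sketch: always extend the
    highest-scoring edge, as in Pearl's agenda loop.
    lexicon: word -> [(tag, logprob)]
    rules:   (B, C) -> [(A, logprob)] for binary rules A -> B C."""
    best = {}                   # (label, i, j) -> best log score seen
    starts = defaultdict(list)  # i -> [(label, j)] chart edges from i
    ends = defaultdict(list)    # j -> [(label, i)] chart edges to j
    agenda = []                 # max-heap via negated scores

    def propose(label, i, j, s):
        if best.get((label, i, j), float("-inf")) < s:
            best[(label, i, j)] = s
            heapq.heappush(agenda, (-s, label, i, j))

    for i, w in enumerate(words):              # seed lexical edges
        for tag, s in lexicon.get(w, []):
            propose(tag, i, i + 1, s)

    while agenda:
        neg_s, b, i, j = heapq.heappop(agenda)
        s = -neg_s
        if s < best[(b, i, j)]:
            continue                           # stale agenda entry
        starts[i].append((b, j))
        ends[j].append((b, i))
        for c, k in starts[j]:                 # popped edge as left child
            for a, rs in rules.get((b, c), []):
                propose(a, i, k, s + best[(c, j, k)] + rs)
        for c, h in ends[i]:                   # popped edge as right child
            for a, rs in rules.get((c, b), []):
                propose(a, h, j, s + best[(c, h, i)] + rs)

    return best.get((goal, 0, len(words)))
```

With true log-probabilities the first goal edge popped is already optimal (a Knuth-style extension of Dijkstra's argument); with richer context-dependent scores the same loop becomes the heuristic "pursue the highest-scoring theory" search the abstract describes.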
arxiv-668302 | cmp-lg/9405006 | Efficiency, Robustness, and Accuracy in Picky Chart Parsing | <|reference_start|>Efficiency, Robustness, and Accuracy in Picky Chart Parsing: This paper describes Picky, a probabilistic agenda-based chart parsing algorithm which uses a technique called {\em probabilistic prediction} to predict which grammar rules are likely to lead to an acceptable parse of the input. Using a suboptimal search method, Picky significantly reduces the number of edges produced by CKY-like chart parsing algorithms, while maintaining the robustness of pure bottom-up parsers and the accuracy of existing probabilistic parsers. Experiments using Picky demonstrate how probabilistic modelling can impact upon the efficiency, robustness and accuracy of a parser.<|reference_end|> | arxiv | @article{magerman1994efficiency,
title={Efficiency, Robustness, and Accuracy in Picky Chart Parsing},
author={David M. Magerman and Carl Weir},
journal={Proceedings, ACL 1992},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405006},
primaryClass={cmp-lg cs.CL}
} | magerman1994efficiency |
arxiv-668303 | cmp-lg/9405007 | Towards History-based Grammars: Using Richer Models for Probabilistic Parsing | <|reference_start|>Towards History-based Grammars: Using Richer Models for Probabilistic Parsing: We describe a generative probabilistic model of natural language, which we call HBG, that takes advantage of detailed linguistic information to resolve ambiguity. HBG incorporates lexical, syntactic, semantic, and structural information from the parse tree into the disambiguation process in a novel way. We use a corpus of bracketed sentences, called a Treebank, in combination with decision tree building to tease out the relevant aspects of a parse tree that will determine the correct parse of a sentence. This stands in contrast to the usual approach of further grammar tailoring via the usual linguistic introspection in the hope of generating the correct parse. In head-to-head tests against one of the best existing robust probabilistic parsing models, which we call P-CFG, the HBG model significantly outperforms P-CFG, increasing the parsing accuracy rate from 60% to 75%, a 37% reduction in error.<|reference_end|> | arxiv | @article{black1994towards,
title={Towards History-based Grammars: Using Richer Models for Probabilistic
Parsing},
author={Ezra Black and Fred Jelinek and John Lafferty and David M. Magerman
and Robert Mercer and Salim Roukos},
journal={Proceedings, DARPA Speech and Natural Language Workshop, 1992},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405007},
primaryClass={cmp-lg cs.CL}
} | black1994towards |
arxiv-668304 | cmp-lg/9405008 | A Stochastic Finite-State Word-Segmentation Algorithm for Chinese | <|reference_start|>A Stochastic Finite-State Word-Segmentation Algorithm for Chinese: We present a stochastic finite-state model for segmenting Chinese text into dictionary entries and productively derived words, and providing pronunciations for these words; the method incorporates a class-based model in its treatment of personal names. We also evaluate the system's performance, taking into account the fact that people often do not agree on a single segmentation.<|reference_end|> | arxiv | @article{sproat1994a,
title={A Stochastic Finite-State Word-Segmentation Algorithm for Chinese},
author={Richard Sproat (AT&T Bell Laboratories) and Chilin Shih (AT&T Bell
Laboratories) and William Gale (AT&T Bell Laboratories) and Nancy Chang
(Harvard University)},
journal={in Proceedings of ACL 94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405008},
primaryClass={cmp-lg cs.CL}
} | sproat1994a |
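The segmentation task in this record reads naturally as a shortest-path problem: each dictionary word is an arc weighted by its negative log probability, and the best segmentation is the cheapest path across the sentence. Below is a minimal dynamic-programming sketch of that core under a toy unigram model; the paper's finite-state model additionally covers productively derived words and a class-based treatment of personal names, which this omits.

```python
import math

def segment(text, dictionary, max_len=6):
    """Cheapest-cover segmentation.  dictionary: word -> probability.
    best[j] holds the lowest total cost (-log prob) of segmenting
    text[:j]; back[j] records where the last word starts."""
    n = len(text)
    best = [math.inf] * (n + 1)
    back = [0] * (n + 1)
    best[0] = 0.0
    for j in range(1, n + 1):
        for i in range(max(0, j - max_len), j):
            w = text[i:j]
            if w in dictionary and best[i] < math.inf:
                cost = best[i] - math.log(dictionary[w])
                if cost < best[j]:
                    best[j], back[j] = cost, i
    if best[n] == math.inf:
        return None          # dictionary cannot cover the input
    words, j = [], n
    while j > 0:
        words.append(text[back[j]:j])
        j = back[j]
    return words[::-1]
```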
arxiv-668305 | cmp-lg/9405009 | Natural Language Parsing as Statistical Pattern Recognition | <|reference_start|>Natural Language Parsing as Statistical Pattern Recognition: Traditional natural language parsers are based on rewrite rule systems developed in an arduous, time-consuming manner by grammarians. A majority of the grammarian's efforts are devoted to the disambiguation process, first hypothesizing rules which dictate constituent categories and relationships among words in ambiguous sentences, and then seeking exceptions and corrections to these rules. In this work, I propose an automatic method for acquiring a statistical parser from a set of parsed sentences which takes advantage of some initial linguistic input, but avoids the pitfalls of the iterative and seemingly endless grammar development process. Based on distributionally-derived and linguistically-based features of language, this parser acquires a set of statistical decision trees which assign a probability distribution on the space of parse trees given the input sentence. These decision trees take advantage of a significant amount of contextual information, potentially including all of the lexical information in the sentence, to produce highly accurate statistical models of the disambiguation process. By basing the disambiguation criteria selection on entropy reduction rather than human intuition, this parser development method is able to consider more sentences than a human grammarian can when making individual disambiguation rules. In experiments between a parser, acquired using this statistical framework, and a grammarian's rule-based parser, developed over a ten-year period, both using the same training material and test sentences, the decision tree parser significantly outperformed the grammar-based parser on the accuracy measure which the grammarian was trying to maximize, achieving an accuracy of 78% compared to the grammar-based parser's 69%.<|reference_end|> | arxiv | @article{magerman1994natural,
title={Natural Language Parsing as Statistical Pattern Recognition},
author={David M. Magerman},
journal={arXiv preprint arXiv:cmp-lg/9405009},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405009},
primaryClass={cmp-lg cs.CL}
} | magerman1994natural |
arxiv-668306 | cmp-lg/9405010 | Common Topics and Coherent Situations: Interpreting Ellipsis in the Context of Discourse Inference | <|reference_start|>Common Topics and Coherent Situations: Interpreting Ellipsis in the Context of Discourse Inference: It is claimed that a variety of facts concerning ellipsis, event reference, and interclausal coherence can be explained by two features of the linguistic form in question: (1) whether the form leaves behind an empty constituent in the syntax, and (2) whether the form is anaphoric in the semantics. It is proposed that these features interact with one of two types of discourse inference, namely {\it Common Topic} inference and {\it Coherent Situation} inference. The differing ways in which these types of inference utilize syntactic and semantic representations predicts phenomena for which it is otherwise difficult to account.<|reference_end|> | arxiv | @article{kehler1994common,
title={Common Topics and Coherent Situations: Interpreting Ellipsis in the
Context of Discourse Inference},
author={Andrew Kehler (Harvard University)},
journal={ACL-94, Las Cruces, New Mexico},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405010},
primaryClass={cmp-lg cs.CL}
} | kehler1994common |
arxiv-668307 | cmp-lg/9405011 | A Plan-Based Model for Response Generation in Collaborative Task-Oriented Dialogues | <|reference_start|>A Plan-Based Model for Response Generation in Collaborative Task-Oriented Dialogues: This paper presents a plan-based architecture for response generation in collaborative consultation dialogues, with emphasis on cases in which the system (consultant) and user (executing agent) disagree. Our work contributes to an overall system for collaborative problem-solving by providing a plan-based framework that captures the {\em Propose-Evaluate-Modify} cycle of collaboration, and by allowing the system to initiate subdialogues to negotiate proposed additions to the shared plan and to provide support for its claims. In addition, our system handles in a unified manner the negotiation of proposed domain actions, proposed problem-solving actions, and beliefs proposed by discourse actions. Furthermore, it captures cooperative responses within the collaborative framework and accounts for why questions are sometimes never answered.<|reference_end|> | arxiv | @article{chu-carroll1994a,
title={A Plan-Based Model for Response Generation in Collaborative
Task-Oriented Dialogues},
author={Jennifer Chu-Carroll (University of Delaware) and Sandra Carberry
(University of Delaware)},
journal={arXiv preprint arXiv:cmp-lg/9405011},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405011},
primaryClass={cmp-lg cs.CL}
} | chu-carroll1994a |
arxiv-668308 | cmp-lg/9405012 | Integration Of Visual Inter-word Constraints And Linguistic Knowledge In Degraded Text Recognition | <|reference_start|>Integration Of Visual Inter-word Constraints And Linguistic Knowledge In Degraded Text Recognition: Degraded text recognition is a difficult task. Given a noisy text image, a word recognizer can be applied to generate several candidates for each word image. High-level knowledge sources can then be used to select a decision from the candidate set for each word image. In this paper, we propose that visual inter-word constraints can be used to facilitate candidate selection. Visual inter-word constraints provide a way to link word images inside the text page, and to interpret them systematically.<|reference_end|> | arxiv | @article{hong1994integration,
title={Integration Of Visual Inter-word Constraints And Linguistic Knowledge In
Degraded Text Recognition},
author={Tao Hong (Center of Excellence for Document Analysis and Recognition,
Department of Computer Science, State University of New York at Buffalo)},
journal={In Proceedings of ACL-94 (Student Session)},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405012},
primaryClass={cmp-lg cs.CL}
} | hong1994integration |
arxiv-668309 | cmp-lg/9405013 | Collaboration on reference to objects that are not mutually known | <|reference_start|>Collaboration on reference to objects that are not mutually known: In conversation, a person sometimes has to refer to an object that is not previously known to the other participant. We present a plan-based model of how agents collaborate on reference of this sort. In making a reference, an agent uses the most salient attributes of the referent. In understanding a reference, an agent determines his confidence in its adequacy as a means of identifying the referent. To collaborate, the agents use judgment, suggestion, and elaboration moves to refashion an inadequate referring expression.<|reference_end|> | arxiv | @article{edmonds1994collaboration,
title={Collaboration on reference to objects that are not mutually known},
author={Philip G. Edmonds},
journal={arXiv preprint arXiv:cmp-lg/9405013},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405013},
primaryClass={cmp-lg cs.CL}
} | edmonds1994collaboration |
arxiv-668310 | cmp-lg/9405014 | Classifying Cue Phrases in Text and Speech Using Machine Learning | <|reference_start|>Classifying Cue Phrases in Text and Speech Using Machine Learning: Cue phrases may be used in a discourse sense to explicitly signal discourse structure, but also in a sentential sense to convey semantic rather than structural information. This paper explores the use of machine learning for classifying cue phrases as discourse or sentential. Two machine learning programs (Cgrendel and C4.5) are used to induce classification rules from sets of pre-classified cue phrases and their features. Machine learning is shown to be an effective technique for not only automating the generation of classification rules, but also for improving upon previous results.<|reference_end|> | arxiv | @article{litman1994classifying,
title={Classifying Cue Phrases in Text and Speech Using Machine Learning},
author={Diane J. Litman (AT&T Bell Laboratories, Murray Hill, NJ)},
journal={arXiv preprint arXiv:cmp-lg/9405014},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405014},
primaryClass={cmp-lg cs.CL}
} | litman1994classifying |
arxiv-668311 | cmp-lg/9405015 | Intention-based Segmentation: Human Reliability and Correlation with Linguistic Cues | <|reference_start|>Intention-based Segmentation: Human Reliability and Correlation with Linguistic Cues: Certain spans of utterances in a discourse, referred to here as segments, are widely assumed to form coherent units. Further, the segmental structure of discourse has been claimed to constrain and be constrained by many phenomena. However, there is weak consensus on the nature of segments and the criteria for recognizing or generating them. We present quantitative results of a two part study using a corpus of spontaneous, narrative monologues. The first part evaluates the statistical reliability of human segmentation of our corpus, where speaker intention is the segmentation criterion. We then use the subjects' segmentations to evaluate the correlation of discourse segmentation with three linguistic cues (referential noun phrases, cue words, and pauses), using information retrieval metrics.<|reference_end|> | arxiv | @article{passonneau1994intention-based,
title={Intention-based Segmentation: Human Reliability and Correlation with
Linguistic Cues},
author={Rebecca J. Passonneau (Department of Computer Science, Columbia
University, New York) and Diane J. Litman (AT&T Bell Laboratories, Murray
Hill, NJ)},
journal={arXiv preprint arXiv:cmp-lg/9405015},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405015},
primaryClass={cmp-lg cs.CL}
} | passonneau1994intention-based |
arxiv-668312 | cmp-lg/9405016 | Precise n-gram Probabilities from Stochastic Context-free Grammars | <|reference_start|>Precise n-gram Probabilities from Stochastic Context-free Grammars: We present an algorithm for computing n-gram probabilities from stochastic context-free grammars, a procedure that can alleviate some of the standard problems associated with n-grams (estimation from sparse data, lack of linguistic structure, among others). The method operates via the computation of substring expectations, which in turn is accomplished by solving systems of linear equations derived from the grammar. We discuss efficient implementation of the algorithm and report our practical experience with it.<|reference_end|> | arxiv | @article{stolcke1994precise,
title={Precise n-gram Probabilities from Stochastic Context-free Grammars},
author={Andreas Stolcke (ICSI, Berkeley, CA) and Jonathan Segal (ICSI,
Berkeley, CA)},
journal={Proc. ACL, pp. 74-79, June 1994},
year={1994},
doi={10.3115/981732.981743},
number={ICSI TR-94-007},
archivePrefix={arXiv},
eprint={cmp-lg/9405016},
primaryClass={cmp-lg cs.CL}
} | stolcke1994precise |
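The linear-system idea behind this record can be shown on a smaller cousin of the paper's problem. For a consistent SCFG, the expected number of occurrences of each nonterminal in a derivation satisfies a linear system that can be solved directly; the paper's substring (n-gram) expectations come from analogous systems. The sketch below is that miniature, not the full bigram computation.

```python
import numpy as np

def expected_nonterminal_counts(nonterminals, rules, start="S"):
    """rules: list of (lhs, rhs, prob), rhs a tuple of symbols.
    Every occurrence of Y is either the start symbol or is produced
    by expanding some X, so the expected counts c satisfy
        c = e_start + A^T c,
    with A[x, y] = expected number of Y's per expansion of X.
    Assumes a consistent grammar (spectral radius of A below 1)."""
    idx = {nt: k for k, nt in enumerate(nonterminals)}
    n = len(nonterminals)
    A = np.zeros((n, n))
    for lhs, rhs, p in rules:
        for sym in rhs:
            if sym in idx:           # terminals do not contribute
                A[idx[lhs], idx[sym]] += p
    e_start = np.zeros(n)
    e_start[idx[start]] = 1.0
    c = np.linalg.solve(np.eye(n) - A.T, e_start)
    return dict(zip(nonterminals, c))
```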
arxiv-668313 | cmp-lg/9405017 | Best-first Model Merging for Hidden Markov Model Induction | <|reference_start|>Best-first Model Merging for Hidden Markov Model Induction: This report describes a new technique for inducing the structure of Hidden Markov Models from data which is based on the general `model merging' strategy (Omohundro 1992). The process begins with a maximum likelihood HMM that directly encodes the training data. Successively more general models are produced by merging HMM states. A Bayesian posterior probability criterion is used to determine which states to merge and when to stop generalizing. The procedure may be considered a heuristic search for the HMM structure with the highest posterior probability. We discuss a variety of possible priors for HMMs, as well as a number of approximations which improve the computational efficiency of the algorithm. We studied three applications to evaluate the procedure. The first compares the merging algorithm with the standard Baum-Welch approach in inducing simple finite-state languages from small, positive-only training samples. We found that the merging procedure is more robust and accurate, particularly with a small amount of training data. The second application uses labelled speech data from the TIMIT database to build compact, multiple-pronunciation word models that can be used in speech recognition. Finally, we describe how the algorithm was incorporated in an operational speech understanding system, where it is combined with neural network acoustic likelihood estimators to improve performance over single-pronunciation word models.<|reference_end|> | arxiv | @article{stolcke1994best-first,
title={Best-first Model Merging for Hidden Markov Model Induction},
author={Andreas Stolcke (ICSI, Berkeley, CA) and Stephen M. Omohundro (ICSI,
Berkeley, CA)},
journal={arXiv preprint arXiv:cmp-lg/9405017},
year={1994},
number={ICSI TR-94-003},
archivePrefix={arXiv},
eprint={cmp-lg/9405017},
primaryClass={cmp-lg cs.CL}
} | stolcke1994best-first |
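The merging step at the heart of this record is compact enough to sketch. Here an HMM is reduced to transition and emission counts (as if each training string kept its own dedicated path, the algorithm's starting point); merging pools counts, and a Viterbi-style score shows how much data likelihood a merge sacrifices. The Bayesian structure prior and the best-first search loop are omitted, so treat this as the data-structure view only, under assumed input shapes.

```python
import math
from collections import Counter

def merge_states(hmm, keep, drop):
    """Pool the counts of state `drop` into state `keep`.
    hmm = {'trans': Counter[(s, s2)], 'emit': Counter[(s, symbol)]}."""
    trans, emit = Counter(), Counter()
    for (a, b), n in hmm['trans'].items():
        a = keep if a == drop else a
        b = keep if b == drop else b
        trans[(a, b)] += n
    for (a, o), n in hmm['emit'].items():
        emit[(keep if a == drop else a, o)] += n
    return {'trans': trans, 'emit': emit}

def viterbi_loglik(hmm):
    """Log-likelihood of the counts under their own MLE parameters
    (a Viterbi-path approximation).  Merging can only lower this;
    a prior favoring fewer states decides whether the trade pays."""
    ll = 0.0
    for table in (hmm['trans'], hmm['emit']):
        totals = Counter()
        for (a, _), n in table.items():
            totals[a] += n
        for (a, _), n in table.items():
            ll += n * math.log(n / totals[a])
    return ll
```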
arxiv-668314 | cmp-lg/9405018 | Memory-Based Lexical Acquisition and Processing | <|reference_start|>Memory-Based Lexical Acquisition and Processing: Current approaches to computational lexicology in language technology are knowledge-based (competence-oriented) and try to abstract away from specific formalisms, domains, and applications. This results in severe complexity, acquisition and reusability bottlenecks. As an alternative, we propose a particular performance-oriented approach to Natural Language Processing based on automatic memory-based learning of linguistic (lexical) tasks. The consequences of the approach for computational lexicology are discussed, and the application of the approach on a number of lexical acquisition and disambiguation tasks in phonology, morphology and syntax is described.<|reference_end|> | arxiv | @article{daelemans1994memory-based,
title={Memory-Based Lexical Acquisition and Processing},
author={Walter Daelemans},
journal={Steffens (ed.) Machine Translation and the Lexicon. Springer, 1995},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405018},
primaryClass={cmp-lg cs.CL}
} | daelemans1994memory-based |
arxiv-668315 | cmp-lg/9405019 | Determination of referential property and number of nouns in Japanese sentences for machine translation into English | <|reference_start|>Determination of referential property and number of nouns in Japanese sentences for machine translation into English: When translating Japanese nouns into English, we face the problem of articles and numbers which the Japanese language does not have, but which are necessary for the English composition. To solve this difficult problem we classified the referential property and the number of nouns into three types respectively. This paper shows that the referential property and the number of nouns in a sentence can be estimated fairly reliably by the words in the sentence. Many rules for the estimation were written in forms similar to rewriting rules in expert systems. We obtained the correct recognition scores of 85.5\% and 89.0\% in the estimation of the referential property and the number respectively for the sentences which were used for the construction of our rules. We tested these rules for some other texts, and obtained the scores of 68.9\% and 85.6\% respectively.<|reference_end|> | arxiv | @article{murata1994determination,
title={Determination of referential property and number of nouns in Japanese
sentences for machine translation into English},
author={Masaki Murata and Makoto Nagao},
journal={arXiv preprint arXiv:cmp-lg/9405019},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405019},
primaryClass={cmp-lg cs.CL}
} | murata1994determination |
arxiv-668316 | cmp-lg/9405020 | Capturing CFLs with Tree Adjoining Grammars | <|reference_start|>Capturing CFLs with Tree Adjoining Grammars: We define a decidable class of TAGs that is strongly equivalent to CFGs and is cubic-time parsable. This class serves to lexicalize CFGs in the same manner as the LCFGs of Schabes and Waters but with considerably less restriction on the form of the grammars. The class provides a normal form for TAGs that generate local sets in much the same way that regular grammars provide a normal form for CFGs that generate regular sets.<|reference_end|> | arxiv | @article{rogers1994capturing,
title={Capturing CFLs with Tree Adjoining Grammars},
author={James Rogers (Univ. of Delaware)},
journal={In Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405020},
primaryClass={cmp-lg cs.CL}
} | rogers1994capturing |
arxiv-668317 | cmp-lg/9405021 | Generating Precondition Expressions in Instructional Text | <|reference_start|>Generating Precondition Expressions in Instructional Text: This study employs a knowledge intensive corpus analysis to identify the elements of the communicative context which can be used to determine the appropriate lexical and grammatical form of instructional texts. \ig, an instructional text generation system based on this analysis, is presented, particularly with reference to its expression of precondition relations.<|reference_end|> | arxiv | @article{linden1994generating,
title={Generating Precondition Expressions in Instructional Text},
author={Keith Vander Linden (ITRI, University of Brighton)},
journal={proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405021},
primaryClass={cmp-lg cs.CL}
} | linden1994generating |
arxiv-668318 | cmp-lg/9405022 | Grammar Specialization through Entropy Thresholds | <|reference_start|>Grammar Specialization through Entropy Thresholds: Explanation-based generalization is used to extract a specialized grammar from the original one using a training corpus of parse trees. This allows very much faster parsing and gives a lower error rate, at the price of a small loss in coverage. Previously, it has been necessary to specify the tree-cutting criteria (or operationality criteria) manually; here they are derived automatically from the training set and the desired coverage of the specialized grammar. This is done by assigning an entropy value to each node in the parse trees and cutting in the nodes with sufficiently high entropy values.<|reference_end|> | arxiv | @article{samuelsson1994grammar,
title={Grammar Specialization through Entropy Thresholds},
author={Christer Samuelsson (Swedish Institute of Computer Science)},
journal={In Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405022},
primaryClass={cmp-lg cs.CL}
} | samuelsson1994grammar |
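The entropy criterion in this record is easy to demonstrate. A hedged sketch: estimate, per node label, the entropy of the expansion distribution observed in the training treebank; tree-cutting then severs trees at nodes whose entropy exceeds a threshold derived from the target coverage. The paper assigns entropies to individual nodes and derives the threshold automatically; this per-label version shows only the entropy bookkeeping.

```python
import math
from collections import Counter, defaultdict

def expansion_entropies(trees):
    """trees: nodes are (label, [children]); leaves are strings.
    Returns, per label, the entropy of the distribution over observed
    expansions (child-label sequences).  High-entropy nodes are the
    unpredictable ones, the natural cutting points."""
    counts = defaultdict(Counter)

    def visit(node):
        if isinstance(node, str):
            return
        label, children = node
        rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
        counts[label][rhs] += 1
        for child in children:
            visit(child)

    for tree in trees:
        visit(tree)
    ent = {}
    for label, dist in counts.items():
        total = sum(dist.values())
        ent[label] = -sum(n / total * math.log2(n / total)
                          for n in dist.values())
    return ent
```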
arxiv-668319 | cmp-lg/9405023 | An Integrated Heuristic Scheme for Partial Parse Evaluation | <|reference_start|>An Integrated Heuristic Scheme for Partial Parse Evaluation: GLR* is a recently developed robust version of the Generalized LR Parser, that can parse almost ANY input sentence by ignoring unrecognizable parts of the sentence. On a given input sentence, the parser returns a collection of parses that correspond to maximal, or close to maximal, parsable subsets of the original input. This paper describes recent work on developing an integrated heuristic scheme for selecting the parse that is deemed ``best'' from such a collection. We describe the heuristic measures used and their combination scheme. Preliminary results from experiments conducted on parsing speech recognized spontaneous speech are also reported.<|reference_end|> | arxiv | @article{lavie1994an,
title={An Integrated Heuristic Scheme for Partial Parse Evaluation},
author={Alon Lavie (School of Computer Science, Carnegie Mellon University)},
journal={In Proceedings of ACL-94 (student session)},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405023},
primaryClass={cmp-lg cs.CL}
} | lavie1994an |
arxiv-668320 | cmp-lg/9405024 | Abductive Equivalential Translation and its application to Natural Language Database Interfacing | <|reference_start|>Abductive Equivalential Translation and its application to Natural Language Database Interfacing: The thesis describes a logical formalization of natural-language database interfacing. We assume the existence of a ``natural language engine'' capable of mediating between surface linguistic string and their representations as ``literal'' logical forms: the focus of interest will be the question of relating ``literal'' logical forms to representations in terms of primitives meaningful to the underlying database engine. We begin by describing the nature of the problem, and show how a variety of interface functionalities can be considered as instances of a type of formal inference task which we call ``Abductive Equivalential Translation'' (AET); functionalities which can be reduced to this form include answering questions, responding to commands, reasoning about the completeness of answers, answering meta-questions of type ``Do you know...'', and generating assertions and questions. In each case, a ``linguistic domain theory'' (LDT) $\Gamma$ and an input formula $F$ are given, and the goal is to construct a formula with certain properties which is equivalent to $F$, given $\Gamma$ and a set of permitted assumptions. If the LDT is of a certain specified type, whose formulas are either conditional equivalences or Horn-clauses, we show that the AET problem can be reduced to a goal-directed inference method. We present an abstract description of this method, and sketch its realization in Prolog. The relationship between AET and several problems previously discussed in the literature is discussed. In particular, we show how AET can provide a simple and elegant solution to the so-called ``Doctor on Board'' problem, and in effect allows a ``relativization'' of the Closed World Assumption. The ideas in the thesis have all been implemented concretely within the SRI CLARE project, using a real projects and payments database. The LDT for the example database is described in detail, and examples of the types of functionality that can be achieved within the example domain are presented.<|reference_end|> | arxiv | @article{rayner1994abductive,
title={Abductive Equivalential Translation and its application to Natural
Language Database Interfacing},
author={Manny Rayner},
journal={arXiv preprint arXiv:cmp-lg/9405024},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405024},
primaryClass={cmp-lg cs.CL}
} | rayner1994abductive |
arxiv-668321 | cmp-lg/9405025 | An Optimal Tabular Parsing Algorithm | <|reference_start|>An Optimal Tabular Parsing Algorithm: In this paper we relate a number of parsing algorithms which have been developed in very different areas of parsing theory, and which include deterministic algorithms, tabular algorithms, and a parallel algorithm. We show that these algorithms are based on the same underlying ideas. By relating existing ideas, we hope to provide an opportunity to improve some algorithms based on features of others. A second purpose of this paper is to answer a question which has come up in the area of tabular parsing, namely how to obtain a parsing algorithm with the property that the table will contain as few entries as possible, but without the possibility that two entries represent the same subderivation.<|reference_end|> | arxiv | @article{nederhof1994an,
title={An Optimal Tabular Parsing Algorithm},
author={Mark-Jan Nederhof (University of Nijmegen)},
journal={In Proceedings of ACL-94},
year={1994},
number={To appear in ACL-94},
archivePrefix={arXiv},
eprint={cmp-lg/9405025},
primaryClass={cmp-lg cs.CL}
} | nederhof1994an |
arxiv-668322 | cmp-lg/9405026 | An Extended Theory of Head-Driven Parsing | <|reference_start|>An Extended Theory of Head-Driven Parsing: We show that more head-driven parsing algorithms can be formulated than those occurring in the existing literature. These algorithms are inspired by a family of left-to-right parsing algorithms from a recent publication. We further introduce a more advanced notion of ``head-driven parsing'' which allows more detailed specification of the processing order of non-head elements in the right-hand side. We develop a parsing algorithm for this strategy, based on LR parsing techniques.<|reference_end|> | arxiv | @article{nederhof1994an,
title={An Extended Theory of Head-Driven Parsing},
author={Mark-Jan Nederhof (University of Nijmegen) and Giorgio Satta
(University of Padova)},
journal={In Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405026},
primaryClass={cmp-lg cs.CL}
} | nederhof1994an |
arxiv-668323 | cmp-lg/9405027 | Acquiring Receptive Morphology: A Connectionist Model | <|reference_start|>Acquiring Receptive Morphology: A Connectionist Model: This paper describes a modular connectionist model of the acquisition of receptive inflectional morphology. The model takes inputs in the form of phones one at a time and outputs the associated roots and inflections. Simulations using artificial language stimuli demonstrate the capacity of the model to learn suffixation, prefixation, infixation, circumfixation, mutation, template, and deletion rules. Separate network modules responsible for syllables enable the network to learn simple reduplication rules as well. The model also embodies constraints against association-line crossing.<|reference_end|> | arxiv | @article{gasser1994acquiring,
title={Acquiring Receptive Morphology: A Connectionist Model},
author={Michael Gasser},
journal={In Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405027},
primaryClass={cmp-lg cs.CL}
} | gasser1994acquiring |
arxiv-668324 | cmp-lg/9405028 | Semantics of Complex Sentences in Japanese | <|reference_start|>Semantics of Complex Sentences in Japanese: The important part of the semantics of a complex sentence is captured as relations among the semantic roles of the subordinate and main clauses respectively. However, if relations can hold between every pair of semantic roles, the amount of computation needed to identify the relations that hold in a given sentence is extremely large. In this paper, for the semantics of Japanese complex sentences, we introduce new pragmatic roles called `observer' and `motivated' to bridge the semantic roles of the subordinate and main clauses. With these new roles, the constraints on the relations among semantic/pragmatic roles become almost local within the subordinate or main clause. In other words, for the semantics of the whole complex sentence, the only role we need to deal with is the motivated.<|reference_end|> | arxiv | @article{nakagawa1994semantics,
title={Semantics of Complex Sentences in Japanese},
author={Hiroshi Nakagawa and Shin'ichiro Nishizawa},
journal={arXiv preprint arXiv:cmp-lg/9405028},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405028},
primaryClass={cmp-lg cs.CL}
} | nakagawa1994semantics |
arxiv-668325 | cmp-lg/9405029 | Structural Tags, Annealing and Automatic Word Classification | <|reference_start|>Structural Tags, Annealing and Automatic Word Classification: This paper describes an automatic word classification system which uses a locally optimal annealing algorithm and average class mutual information. A new word-class representation, the structural tag is introduced and its advantages for use in statistical language modelling are presented. A summary of some results with the one million word LOB corpus is given; the algorithm is also shown to discover the vowel-consonant distinction and displays an ability to cluster words syntactically in a Latin corpus. Finally, a comparison is made between the current classification system and several leading alternative systems, which shows that the current system performs respectably well.<|reference_end|> | arxiv | @article{mcmahon1994structural,
title={Structural Tags, Annealing and Automatic Word Classification},
author={John McMahon and F. J. Smith (Queen's University, Belfast)},
journal={arXiv preprint arXiv:cmp-lg/9405029},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405029},
primaryClass={cmp-lg cs.CL}
} | mcmahon1994structural |
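The objective being annealed in this record, average class mutual information, takes only a few lines to evaluate from class-level bigram counts. The sketch below scores a given word-to-class assignment; the paper's locally optimal annealing repeatedly perturbs the assignment (for instance, moving one word to another class) and keeps changes according to this score. Input shapes are assumptions.

```python
import math
from collections import Counter

def avg_class_mutual_information(bigrams, word2class):
    """bigrams: Counter over adjacent word pairs (w1, w2).
    Returns sum over class pairs of
        p(c1, c2) * log2( p(c1, c2) / (p(c1) * p(c2)) ),
    the quantity the clustering search tries to maximize."""
    pair, left, right = Counter(), Counter(), Counter()
    total = sum(bigrams.values())
    for (w1, w2), n in bigrams.items():
        c1, c2 = word2class[w1], word2class[w2]
        pair[(c1, c2)] += n
        left[c1] += n
        right[c2] += n
    mi = 0.0
    for (c1, c2), n in pair.items():
        mi += (n / total) * math.log2(n * total / (left[c1] * right[c2]))
    return mi
```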
arxiv-668326 | cmp-lg/9405030 | Priority Union and Generalization in Discourse Grammars | <|reference_start|>Priority Union and Generalization in Discourse Grammars: We describe an implementation in Carpenter's typed feature formalism, ALE, of a discourse grammar of the kind proposed by Scha, Polanyi, et al. We examine their method for resolving parallelism-dependent anaphora and show that there is a coherent feature-structural rendition of this type of grammar which uses the operations of priority union and generalization. We describe an augmentation of the ALE system to encompass these operations and we show that an appropriate choice of definition for priority union gives the desired multiple output for examples of VP-ellipsis which exhibit a strict/sloppy ambiguity.<|reference_end|> | arxiv | @article{grover1994priority,
title={Priority Union and Generalization in Discourse Grammars},
author={Claire Grover and Chris Brew and Suresh Manandhar and Marc Moens
(HCRC Language Technology Group, University of Edinburgh)},
journal={In Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405030},
primaryClass={cmp-lg cs.CL}
} | grover1994priority |
arxiv-668327 | cmp-lg/9405031 | An Attributive Logic of Set Descriptions and Set Operations | <|reference_start|>An Attributive Logic of Set Descriptions and Set Operations: This paper provides a model theoretic semantics to feature terms augmented with set descriptions. We provide constraints to specify HPSG style set descriptions, fixed cardinality set descriptions, set-membership constraints, restricted universal role quantifications, set union, intersection, subset and disjointness. A sound, complete and terminating consistency checking procedure is provided to determine the consistency of any given term in the logic. It is shown that determining consistency of terms is an NP-complete problem.<|reference_end|> | arxiv | @article{manandhar1994an,
title={An Attributive Logic of Set Descriptions and Set Operations},
author={Suresh Manandhar (HCRC Language Technology Group, The University of
Edinburgh, UK)},
journal={arXiv preprint arXiv:cmp-lg/9405031},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405031},
primaryClass={cmp-lg cs.CL}
} | manandhar1994an |
arxiv-668328 | cmp-lg/9405032 | Modularity in a Connectionist Model of Morphology Acquisition | <|reference_start|>Modularity in a Connectionist Model of Morphology Acquisition: This paper describes a modular connectionist model of the acquisition of receptive inflectional morphology. The model takes inputs in the form of phones one at a time and outputs the associated roots and inflections. In its simplest version, the network consists of separate simple recurrent subnetworks for root and inflection identification; both networks take the phone sequence as inputs. It is shown that the performance of the two separate modular networks is superior to a single network responsible for both root and inflection identification. In a more elaborate version of the model, the network learns to use separate hidden-layer modules to solve the separate tasks of root and inflection identification.<|reference_end|> | arxiv | @article{gasser1994modularity,
title={Modularity in a Connectionist Model of Morphology Acquisition},
author={Michael Gasser (Indiana University)},
journal={Proceedings of COLING 94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405032},
primaryClass={cmp-lg cs.CL}
} | gasser1994modularity |
arxiv-668329 | cmp-lg/9405033 | Relating Complexity to Practical Performance in Parsing with Wide-Coverage Unification Grammars | <|reference_start|>Relating Complexity to Practical Performance in Parsing with Wide-Coverage Unification Grammars: The paper demonstrates that exponential complexities with respect to grammar size and input length have little impact on the performance of three unification-based parsing algorithms, using a wide-coverage grammar. The results imply that the study and optimisation of unification-based parsing must rely on empirical data until complexity theory can more accurately predict the practical behaviour of such parsers.<|reference_end|> | arxiv | @article{carroll1994relating,
title={Relating Complexity to Practical Performance in Parsing with
Wide-Coverage Unification Grammars},
author={John Carroll (University of Cambridge, Computer Laboratory)},
journal={32nd Annual Meeting of the ACL, 287-294},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405033},
primaryClass={cmp-lg cs.CL}
} | carroll1994relating |
arxiv-668330 | cmp-lg/9405034 | Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation | <|reference_start|>Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and Its Automatic Evaluation: To acquire noun phrases from running texts is useful for many applications, such as word grouping,terminology indexing, etc. The reported literatures adopt pure probabilistic approach, or pure rule-based noun phrases grammar to tackle this problem. In this paper, we apply a probabilistic chunker to deciding the implicit boundaries of constituents and utilize the linguistic knowledge to extract the noun phrases by a finite state mechanism. The test texts are SUSANNE Corpus and the results are evaluated by comparing the parse field of SUSANNE Corpus automatically. The results of this preliminary experiment are encouraging.<|reference_end|> | arxiv | @article{chen1994extracting,
title={Extracting Noun Phrases from Large-Scale Texts: A Hybrid Approach and
Its Automatic Evaluation},
author={Kuang-hua Chen (National Taiwan University) and Hsin-Hsi Chen
(National Taiwan University)},
journal={arXiv preprint arXiv:cmp-lg/9405034},
year={1994},
number={To appear in ACL-94},
archivePrefix={arXiv},
eprint={cmp-lg/9405034},
primaryClass={cmp-lg cs.CL}
} | chen1994extracting |
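The finite-state extraction stage of this record is the easy half to illustrate. The toy recognizer below accepts maximal tag sequences of the shape determiner? adjective* noun+ over (word, tag) pairs; the tagset and the pattern are illustrative assumptions, and the paper's mechanism also consults the probabilistic chunker's constituent boundaries rather than raw tags alone.

```python
def extract_nps(tagged):
    """Extract maximal noun phrases matching DT? JJ* NN+ from a
    list of (word, tag) pairs, a stand-in for a finite-state NP
    recognizer over a chunked tag sequence."""
    nps, i, n = [], 0, len(tagged)
    while i < n:
        j = i
        if j < n and tagged[j][1] == "DT":
            j += 1
        while j < n and tagged[j][1] == "JJ":
            j += 1
        k = j
        while k < n and tagged[k][1] in ("NN", "NNS", "NNP"):
            k += 1
        if k > j:                        # at least one noun: accept
            nps.append(" ".join(w for w, _ in tagged[i:k]))
            i = k
        else:
            i += 1                       # no NP starts here; move on
    return nps
```

For example, `extract_nps([("the", "DT"), ("big", "JJ"), ("dogs", "NNS")])` yields `["the big dogs"]`.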
arxiv-668331 | cmp-lg/9405035 | Dual-Coding Theory and Connectionist Lexical Selection | <|reference_start|>Dual-Coding Theory and Connectionist Lexical Selection: We introduce the bilingual dual-coding theory as a model for bilingual mental representation. Based on this model, lexical selection neural networks are implemented for a connectionist transfer project in machine translation. This lexical selection approach has two advantages. First, it is learnable. Little human effort on knowledge engineering is required. Secondly, it is psycholinguistically well-founded.<|reference_end|> | arxiv | @article{wang1994dual-coding,
title={Dual-Coding Theory and Connectionist Lexical Selection},
author={Ye-Yi Wang (Carnegie Mellon University)},
journal={arXiv preprint arXiv:cmp-lg/9405035},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9405035},
primaryClass={cmp-lg cs.CL}
} | wang1994dual-coding |
arxiv-668332 | cmp-lg/9406001 | Intentions and Information in Discourse | <|reference_start|>Intentions and Information in Discourse: This paper is about the flow of inference between communicative intentions, discourse structure and the domain during discourse processing. We augment a theory of discourse interpretation with a theory of distinct mental attitudes and reasoning about them, in order to provide an account of how the attitudes interact with reasoning about discourse structure.<|reference_end|> | arxiv | @article{asher1994intentions,
title={Intentions and Information in Discourse},
author={Nicholas Asher (IRIT, Universite Paul Sabatier, Toulouse) and Alex
Lascarides (Department of Linguistics, Stanford University)},
journal={Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406001},
primaryClass={cmp-lg cs.CL}
} | asher1994intentions |
arxiv-668333 | cmp-lg/9406002 | Speech Dialogue with Facial Displays: Multimodal Human-Computer Conversation | <|reference_start|>Speech Dialogue with Facial Displays: Multimodal Human-Computer Conversation: Human face-to-face conversation is an ideal model for human-computer dialogue. One of the major features of face-to-face communication is its multiplicity of communication channels that act on multiple modalities. To realize a natural multimodal dialogue, it is necessary to study how humans perceive information and determine the information to which humans are sensitive. A face is an independent communication channel that conveys emotional and conversational signals, encoded as facial expressions. We have developed an experimental system that integrates speech dialogue and facial animation, to investigate the effect of introducing communicative facial expressions as a new modality in human-computer conversation. Our experiments have shown that facial expressions are helpful, especially upon first contact with the system. We have also discovered that featuring facial expressions at an early stage improves subsequent interaction.<|reference_end|> | arxiv | @article{nagao1994speech,
title={Speech Dialogue with Facial Displays: Multimodal Human-Computer
Conversation},
author={Katashi Nagao (Sony Computer Science Laboratory Inc.) and Akikazu
Takeuchi (Sony Computer Science Laboratory Inc.)},
journal={arXiv preprint arXiv:cmp-lg/9406002},
year={1994},
number={To appear in Proceedings of ACL-94},
archivePrefix={arXiv},
eprint={cmp-lg/9406002},
primaryClass={cmp-lg cs.CL}
} | nagao1994speech |
arxiv-668334 | cmp-lg/9406003 | A Learning Approach to Natural Language Understanding | <|reference_start|>A Learning Approach to Natural Language Understanding: In this paper we propose a learning paradigm for the problem of understanding spoken language. The basis of the work is in a formalization of the understanding problem as a communication problem. This results in the definition of a stochastic model of the production of speech or text starting from the meaning of a sentence. The resulting understanding algorithm consists in a Viterbi maximization procedure, analogous to that commonly used for recognizing speech. The algorithm was implemented for building<|reference_end|> | arxiv | @article{pieraccini1994a,
title={A Learning Approach to Natural Language Understanding},
author={Roberto Pieraccini (AT&T Bell Laboratories) and Esther Levin (AT&T
Bell Laboratories)},
journal={"New Advances and Trends in Speech Recognition and Coding", NATO
ASI Series, Springer-Verlag, proceedings of the 1993 NATO ASI Summer School,
Bubion, Spain, June-July 1993},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406003},
primaryClass={cmp-lg cs.CL}
} | pieraccini1994a |
arxiv-668335 | cmp-lg/9406004 | Towards a Principled Representation of Discourse Plans | <|reference_start|>Towards a Principled Representation of Discourse Plans: We argue that discourse plans must capture the intended causal and decompositional relations between communicative actions. We present a planning algorithm, DPOCL, that builds plan structures that properly capture these relations, and show how these structures are used to solve the problems that plagued previous discourse planners, and allow a system to participate effectively and flexibly in an ongoing dialogue.<|reference_end|> | arxiv | @article{young1994towards,
title={Towards a Principled Representation of Discourse Plans},
author={R. Michael Young (Intelligent Systems Program, University of
Pittsburgh) and Johanna D. Moore (Department of Computer Science and Learning
Research and Development Center, University of Pittsburgh) and Martha E.
Pollack (Department of Computer Science and Intelligent Systems Program,
University of Pittsburgh)},
journal={To appear in Proceedings of the Sixteenth Annual Conference of the
Cognitive Science Society, Atlanta, Ga, August, 1994},
year={1994},
number={ISP Technical Report# 94-2},
archivePrefix={arXiv},
eprint={cmp-lg/9406004},
primaryClass={cmp-lg cs.CL}
} | young1994towards |
arxiv-668336 | cmp-lg/9406005 | Word-Sense Disambiguation Using Decomposable Models | <|reference_start|>Word-Sense Disambiguation Using Decomposable Models: Most probabilistic classifiers used for word-sense disambiguation have either been based on only one contextual feature or have used a model that is simply assumed to characterize the interdependencies among multiple contextual features. In this paper, a different approach to formulating a probabilistic model is presented along with a case study of the performance of models produced in this manner for the disambiguation of the noun "interest". We describe a method for formulating probabilistic models that use multiple contextual features for word-sense disambiguation, without requiring untested assumptions regarding the form of the model. Using this approach, the joint distribution of all variables is described by only the most systematic variable interactions, thereby limiting the number of parameters to be estimated, supporting computational efficiency, and providing an understanding of the data.<|reference_end|> | arxiv | @article{bruce1994word-sense,
title={Word-Sense Disambiguation Using Decomposable Models},
author={Rebecca Bruce and Janyce Wiebe (New Mexico State University)},
journal={arXiv preprint arXiv:cmp-lg/9406005},
year={1994},
number={To appear in ACL-94},
archivePrefix={arXiv},
eprint={cmp-lg/9406005},
primaryClass={cmp-lg cs.CL}
} | bruce1994word-sense |
arxiv-668337 | cmp-lg/9406006 | Detecting and Correcting Speech Repairs | <|reference_start|>Detecting and Correcting Speech Repairs: Interactive spoken dialog provides many new challenges for spoken language systems. One of the most critical is the prevalence of speech repairs. This paper presents an algorithm that detects and corrects speech repairs based on finding the repair pattern. The repair pattern is built by finding word matches and word replacements, and identifying fragments and editing terms. Rather than using a set of prebuilt templates, we build the pattern on the fly. In a fair test, our method, when combined with a statistical model to filter possible repairs, was successful at detecting and correcting 80\% of the repairs, without using prosodic information or a parser.<|reference_end|> | arxiv | @article{heeman1994detecting,
title={Detecting and Correcting Speech Repairs},
author={Peter Heeman (U of Rochester) and James Allen (U of Rochester)},
journal={arXiv preprint arXiv:cmp-lg/9406006},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406006},
primaryClass={cmp-lg cs.CL}
} | heeman1994detecting |
arxiv-668338 | cmp-lg/9406007 | Aligning a Parallel English-Chinese Corpus Statistically with Lexical Criteria | <|reference_start|>Aligning a Parallel English-Chinese Corpus Statistically with Lexical Criteria: We describe our experience with automatic alignment of sentences in parallel English-Chinese texts. Our report concerns three related topics: (1) progress on the HKUST English-Chinese Parallel Bilingual Corpus; (2) experiments addressing the applicability of Gale & Church's length-based statistical method to the task of alignment involving a non-Indo-European language; and (3) an improved statistical method that also incorporates domain-specific lexical cues.<|reference_end|> | arxiv | @article{wu1994aligning,
title={Aligning a Parallel English-Chinese Corpus Statistically with Lexical
Criteria},
author={Dekai Wu (Hong Kong University of Science & Technology)},
journal={In Proceedings of ACL-94.},
year={1994},
number={HKUST-CS93-9 (revised)},
archivePrefix={arXiv},
eprint={cmp-lg/9406007},
primaryClass={cmp-lg cs.CL}
} | wu1994aligning |
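Since this record builds on Gale & Church's length-based method, a compact rendering of that baseline is useful context: beads of 1-1, 1-0, 0-1, 2-1, and 1-2 sentences are scored by a Gaussian model of character-length ratios plus bead priors, and dynamic programming selects the best bead sequence. The constants below are rounded from the published English-French values; the paper's point is precisely that such parameters must be re-estimated for English-Chinese and that lexical cues then improve the alignment.

```python
import math

PRIORS = {(1, 1): 0.89, (1, 0): 0.005, (0, 1): 0.005,
          (2, 1): 0.045, (1, 2): 0.045}
C, S2 = 1.06, 6.8        # mean length ratio, per-character variance

def _cost(l1, l2):
    """-log two-sided tail probability of the length discrepancy
    delta = (l2 - C*l1) / sqrt(l1 * S2) under a standard normal."""
    if l1 == 0 and l2 == 0:
        return 0.0
    delta = (l2 - C * l1) / math.sqrt(max(l1, 1) * S2)
    tail = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(delta) / math.sqrt(2))))
    return -math.log(max(tail, 1e-100))

def align(src_lens, tgt_lens):
    """src_lens, tgt_lens: character lengths of the sentences on each
    side.  Returns the bead sequence as ((i0, i1), (j0, j1)) spans."""
    n, m = len(src_lens), len(tgt_lens)
    INF = float("inf")
    best = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if best[i][j] == INF:
                continue
            for (di, dj), prior in PRIORS.items():
                if i + di > n or j + dj > m:
                    continue
                c = (best[i][j] - math.log(prior)
                     + _cost(sum(src_lens[i:i + di]),
                             sum(tgt_lens[j:j + dj])))
                if c < best[i + di][j + dj]:
                    best[i + di][j + dj] = c
                    back[i + di][j + dj] = (di, dj)
    beads, i, j = [], n, m
    while i or j:
        di, dj = back[i][j]
        beads.append(((i - di, i), (j - dj, j)))
        i, j = i - di, j - dj
    return beads[::-1]
```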
arxiv-668339 | cmp-lg/9406008 | Parsing Turkish with the Lexical Functional Grammar Formalism | <|reference_start|>Parsing Turkish with the Lexical Functional Grammar Formalism: This paper describes our work on parsing Turkish using the lexical-functional grammar formalism. This work represents the first significant effort for parsing Turkish. Our implementation is based on Tomita's parser developed at Carnegie-Mellon University Center for Machine Translation. The grammar covers a substantial subset of Turkish including simple and complex sentences, and deals with a reasonable amount of word order freeness. The complex agglutinative morphology of Turkish lexical structures is handled using a separate two-level morphological analyzer. After a discussion of key relevant issues regarding Turkish grammar, we discuss aspects of our system and present results from our implementation. Our initial results suggest that our system can parse about 82\% of the sentences directly and almost all the remaining with very minor pre-editing.<|reference_end|> | arxiv | @article{gungordu1994parsing,
title={Parsing Turkish with the Lexical Functional Grammar Formalism},
author={Zelal Gungordu (Center for Cognitive Science, Univ. of Edinburgh) and
Kemal Oflazer (Dept of Computer Engineering, Bilkent University, Ankara)},
journal={Proceedings of COLING'94},
year={1994},
number={(BU-CEIS-9402 Bilkent University CS Dept Tech Report)},
archivePrefix={arXiv},
eprint={cmp-lg/9406008},
primaryClass={cmp-lg cs.CL}
} | gungordu1994parsing |
arxiv-668340 | cmp-lg/9406009 | Multiset-Valued Linear Index Grammars: Imposing Dominance Constraints on Derivations | <|reference_start|>Multiset-Valued Linear Index Grammars: Imposing Dominance Constraints on Derivations: This paper defines multiset-valued linear index grammar and unordered vector grammar with dominance links. The former models certain uses of multiset-valued feature structures in unification-based formalisms, while the latter is motivated by word order variation and by ``quasi-trees'', a generalization of trees. The two formalisms are weakly equivalent, and an important subset is at most context-sensitive and polynomially parsable.<|reference_end|> | arxiv | @article{rambow1994multiset-valued,
title={Multiset-Valued Linear Index Grammars: Imposing Dominance Constraints on
Derivations},
author={Owen Rambow (Universite Paris 7)},
journal={Proc ACL 94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406009},
primaryClass={cmp-lg cs.CL}
} | rambow1994multiset-valued |
arxiv-668341 | cmp-lg/9406010 | Some Advances in Transformation-Based Part of Speech Tagging | <|reference_start|>Some Advances in Transformation-Based Part of Speech Tagging: Most recent research in trainable part of speech taggers has explored stochastic tagging. While these taggers obtain high accuracy, linguistic information is captured indirectly, typically in tens of thousands of lexical and contextual probabilities. In [Brill92], a trainable rule-based tagger was described that obtained performance comparable to that of stochastic taggers, but captured relevant linguistic information in a small number of simple non-stochastic rules. In this paper, we describe a number of extensions to this rule-based tagger. First, we describe a method for expressing lexical relations in tagging that are not captured by stochastic taggers. Next, we show a rule-based approach to tagging unknown words. Finally, we show how the tagger can be extended into a k-best tagger, where multiple tags can be assigned to words in some cases of uncertainty.<|reference_end|> | arxiv | @article{brill1994some,
title={Some Advances in Transformation-Based Part of Speech Tagging},
author={Eric Brill (MIT)},
journal={Proceedings of AAAI94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406010},
primaryClass={cmp-lg cs.CL}
} | brill1994some |
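The greedy learner described in this record is short enough to write down for a single template. The sketch scores each candidate rule "change tag a to b when the previous tag is z" by errors fixed minus correct tags broken, applies the best one, and repeats; the real tagger uses many templates plus the lexical and unknown-word extensions the paper adds. `initial_tags` (for instance, each word's most frequent tag) is an assumed input covering every word.

```python
from collections import Counter

def learn_rules(corpus, initial_tags, max_rules=10):
    """Transformation-based learning, one contextual template only.
    corpus: list of (word, gold_tag).  Returns rules (a, b, z),
    read 'retag a as b when the previous tag is z'."""
    gold = [g for _, g in corpus]
    tags = [initial_tags[w] for w, _ in corpus]
    rules = []
    for _ in range(max_rules):
        fixed = Counter()   # (a, b, z) -> errors this rule would fix
        broken = Counter()  # (a, z) -> correct tags it would break
        for i in range(1, len(tags)):
            a, z = tags[i], tags[i - 1]
            if a == gold[i]:
                broken[(a, z)] += 1
            else:
                fixed[(a, gold[i], z)] += 1
        best, best_gain = None, 0
        for (a, b, z), f in fixed.items():
            gain = f - broken[(a, z)]
            if gain > best_gain:
                best, best_gain = (a, b, z), gain
        if best is None:
            break           # no rule reduces the error count
        a, b, z = best
        for i in range(1, len(tags)):
            if tags[i] == a and tags[i - 1] == z:
                tags[i] = b
        rules.append(best)
    return rules
```

At tagging time the learned rules are simply replayed, in the order learned, over the start-state tags of new text.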
arxiv-668342 | cmp-lg/9406011 | Exploring the Statistical Derivation of Transformational Rule Sequences for Part-of-Speech Tagging | <|reference_start|>Exploring the Statistical Derivation of Transformational Rule Sequences for Part-of-Speech Tagging: Eric Brill has recently proposed a simple and powerful corpus-based language modeling approach that can be applied to various tasks including part-of-speech tagging and building phrase structure trees. The method learns a series of symbolic transformational rules, which can then be applied in sequence to a test corpus to produce predictions. The learning process only requires counting matches for a given set of rule templates, allowing the method to survey a very large space of possible contextual factors. This paper analyses Brill's approach as an interesting variation on existing decision tree methods, based on experiments involving part-of-speech tagging for both English and ancient Greek corpora. In particular, the analysis throws light on why the new mechanism seems surprisingly resistant to overtraining. A fast, incremental implementation and a mechanism for recording the dependencies that underlie the resulting rule sequence are also described.<|reference_end|> | arxiv | @article{ramshaw1994exploring,
title={Exploring the Statistical Derivation of Transformational Rule Sequences
for Part-of-Speech Tagging},
author={Lance A. Ramshaw (Univ. of Pennsylvania and Bowdoin College) and
Mitchell P. Marcus (Univ. of Pennsylvania)},
journal={ACL Balancing Act Workshop proceedings, July 94, pp. 86-95},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406011},
primaryClass={cmp-lg cs.CL}
} | ramshaw1994exploring |
arxiv-668343 | cmp-lg/9406012 | Self-Organizing Machine Translation: Example-Driven Induction of Transfer Functions | <|reference_start|>Self-Organizing Machine Translation: Example-Driven Induction of Transfer Functions: With the advent of faster computers, the notion of doing machine translation from a huge stored database of translation examples is no longer unreasonable. This paper describes an attempt to merge the Example-Based Machine Translation (EBMT) approach with psycholinguistic principles. A new formalism for context-free grammars, called *marker-normal form*, is demonstrated and used to describe language data in a way compatible with psycholinguistic theories. By embedding this formalism in a standard multivariate optimization framework, a system can be built that infers correct transfer functions for a set of bilingual sentence pairs and then uses those functions to translate novel sentences. The validity of this line of reasoning has been tested in the development of a system called METLA-1. This system has been used to infer English->French and English->Urdu transfer functions from small corpora. The results of those experiments are examined, both in engineering terms and in more linguistic terms. In general, the results of these experiments were psychologically and linguistically well-grounded while still achieving a respectable level of success when compared against a similar prototype using Hidden Markov Models.<|reference_end|> | arxiv | @article{juola1994self-organizing,
title={Self-Organizing Machine Translation: Example-Driven Induction of
Transfer Functions},
author={Patrick Juola (University of Colorado)},
journal={arXiv preprint arXiv:cmp-lg/9406012},
year={1994},
number={CU-CS-722-94},
archivePrefix={arXiv},
eprint={cmp-lg/9406012},
primaryClass={cmp-lg cs.CL}
} | juola1994self-organizing |
arxiv-668344 | cmp-lg/9406013 | Graded Unification: A Framework for Interactive Processing | <|reference_start|>Graded Unification: A Framework for Interactive Processing: An extension to classical unification, called {\em graded unification} is presented. It is capable of combining contradictory information. An interactive processing paradigm and parser based on this new operator are also presented.<|reference_end|> | arxiv | @article{kim1994graded,
title={Graded Unification: A Framework for Interactive Processing},
author={Albert Kim (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9406013},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406013},
primaryClass={cmp-lg cs.CL}
} | kim1994graded |
arxiv-668345 | cmp-lg/9406014 | A Hybrid Reasoning Model for Indirect Answers | <|reference_start|>A Hybrid Reasoning Model for Indirect Answers: This paper presents our implemented computational model for interpreting and generating indirect answers to Yes-No questions. Its main features are 1) a discourse-plan-based approach to implicature, 2) a reversible architecture for generation and interpretation, 3) a hybrid reasoning model that employs both plan inference and logical inference, and 4) use of stimulus conditions to model a speaker's motivation for providing appropriate, unrequested information. The model handles a wider range of types of indirect answers than previous computational models and has several significant advantages.<|reference_end|> | arxiv | @article{green1994a,
title={A Hybrid Reasoning Model for Indirect Answers},
author={Nancy Green (University of Delaware) and Sandra Carberry (University
of Delaware and Visitor: IRCS, University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9406014},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406014},
primaryClass={cmp-lg cs.CL}
} | green1994a |
arxiv-668346 | cmp-lg/9406015 | Statistical Augmentation of a Chinese Machine-Readable Dictionary | <|reference_start|>Statistical Augmentation of a Chinese Machine-Readable Dictionary: We describe a method of using statistically-collected Chinese character groups from a corpus to augment a Chinese dictionary. The method is particularly useful for extracting domain-specific and regional words not readily available in machine-readable dictionaries. Output was evaluated both using human evaluators and against a previously available dictionary. We also evaluated performance improvement in automatic Chinese tokenization. Results show that our method outputs legitimate words, acronymic constructions, idioms, names and titles, as well as technical compounds, many of which were lacking from the original dictionary.<|reference_end|> | arxiv | @article{fung1994statistical,
title={Statistical Augmentation of a Chinese Machine-Readable Dictionary},
author={Pascale Fung (Columbia University) and Dekai Wu (Hong Kong University
of Science & Technology)},
journal={In WVLC-94, Second Annual Workshop on Very Large Corpora},
year={1994},
number={cucs-015-94},
archivePrefix={arXiv},
eprint={cmp-lg/9406015},
primaryClass={cmp-lg cs.CL}
} | fung1994statistical |
arxiv-668347 | cmp-lg/9406016 | Corpus-Driven Knowledge Acquisition for Discourse Analysis | <|reference_start|>Corpus-Driven Knowledge Acquisition for Discourse Analysis: The availability of large on-line text corpora provides a natural and promising bridge between the worlds of natural language processing (NLP) and machine learning (ML). In recent years, the NLP community has been aggressively investigating statistical techniques to drive part-of-speech taggers, but application-specific text corpora can be used to drive knowledge acquisition at much higher levels as well. In this paper we will show how ML techniques can be used to support knowledge acquisition for information extraction systems. It is often very difficult to specify an explicit domain model for many information extraction applications, and it is always labor intensive to implement hand-coded heuristics for each new domain. We have discovered that it is nevertheless possible to use ML algorithms in order to capture knowledge that is only implicitly present in a representative text corpus. Our work addresses issues traditionally associated with discourse analysis and intersentential inference generation, and demonstrates the utility of ML algorithms at this higher level of language analysis. The benefits of our work concern the portability and scalability of information extraction (IE) technologies. When hand-coded heuristics are used to manage discourse analysis in an information extraction system, months of programming effort are easily needed to port a successful IE system to a new domain. We will show how ML algorithms can reduce this effort.<|reference_end|> | arxiv | @article{soderland1994corpus-driven,
title={Corpus-Driven Knowledge Acquisition for Discourse Analysis},
author={Stephen Soderland and Wendy Lehnert (University of Massachusetts)},
journal={arXiv preprint arXiv:cmp-lg/9406016},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406016},
primaryClass={cmp-lg cs.CL}
} | soderland1994corpus-driven |
arxiv-668348 | cmp-lg/9406017 | An Automatic Method of Finding Topic Boundaries | <|reference_start|>An Automatic Method of Finding Topic Boundaries: This article outlines a new method of locating discourse boundaries based on lexical cohesion and a graphical technique called dotplotting. The application of dotplotting to discourse segmentation can be performed either manually, by examining a graph, or automatically, using an optimization algorithm. The results of two experiments involving automatically locating boundaries between a series of concatenated documents are presented. Areas of application and future directions for this work are also outlined.<|reference_end|> | arxiv | @article{reynar1994an,
title={An Automatic Method of Finding Topic Boundaries},
author={Jeffrey C. Reynar (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9406017},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406017},
primaryClass={cmp-lg cs.CL}
} | reynar1994an |
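As a rough illustration of the automatic variant, one can choose the boundary that minimizes the density of lexical-repetition "dots" crossing it. The sketch below is a simplification under that assumption (a single boundary, with word identity as the only cohesion signal); the paper's optimization algorithm differs in detail.

    # Choose the split point with the lowest cross-boundary dot density.
    def dot_density_boundary(words):
        n = len(words)                      # assumes n >= 2
        positions = {}
        for i, w in enumerate(words):
            positions.setdefault(w, []).append(i)

        def cross_count(b):
            # Dots (i, j) with i < b <= j link the two segments.
            c = 0
            for occs in positions.values():
                left = sum(1 for i in occs if i < b)
                c += left * (len(occs) - left)
            return c

        # Normalize by the area of the cross-boundary rectangle.
        return min(range(1, n), key=lambda b: cross_count(b) / (b * (n - b)))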
arxiv-668349 | cmp-lg/9406018 | TDL--- A Type Description Language for Constraint-Based Grammars | <|reference_start|>TDL--- A Type Description Language for Constraint-Based Grammars: This paper presents TDL, a typed feature-based representation language and inference system. Type definitions in TDL consist of type and feature constraints over the boolean connectives. TDL supports open- and closed-world reasoning over types and allows for partitions and incompatible types. Working with partially as well as with fully expanded types is possible. Efficient reasoning in TDL is accomplished through specialized modules.<|reference_end|> | arxiv | @article{krieger1994tdl---,
title={TDL--- A Type Description Language for Constraint-Based Grammars},
author={Hans-Ulrich Krieger, Ulrich Schäfer},
journal={arXiv preprint arXiv:cmp-lg/9406018},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406018},
primaryClass={cmp-lg cs.CL}
} | krieger1994tdl--- |
arxiv-668350 | cmp-lg/9406019 | A Complete and Recursive Feature Theory | <|reference_start|>A Complete and Recursive Feature Theory: Various feature descriptions are being employed in logic programming languages and constraint-based grammar formalisms. The common notational primitive of these descriptions are functional attributes called features. The descriptions considered in this paper are the possibly quantified first-order formulae obtained from a signature of binary and unary predicates called features and sorts, respectively. We establish a first-order theory FT by means of three axiom schemes, show its completeness, and construct three elementarily equivalent models. One of the models consists of so-called feature graphs, a data structure common in computational linguistics. The other two models consist of so-called feature trees, a record-like data structure generalizing the trees corresponding to first-order terms. Our completeness proof exhibits a terminating simplification system deciding validity and satisfiability of possibly quantified feature descriptions.<|reference_end|> | arxiv | @article{backofen1994a,
title={A Complete and Recursive Feature Theory},
author={Rolf Backofen, Gert Smolka},
journal={arXiv preprint arXiv:cmp-lg/9406019},
year={1994},
number={RR-92-30},
archivePrefix={arXiv},
eprint={cmp-lg/9406019},
primaryClass={cmp-lg cs.CL}
} | backofen1994a |
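For readers unfamiliar with the data structure, the toy sketch below renders feature graphs as nested dictionaries, with sorts as atoms under a reserved SORT feature; it illustrates the objects the theory axiomatizes, not the paper's simplification system, and omits cycles and structure sharing.

    # Unify two acyclic feature graphs; distinct sorts are incompatible.
    def unify(g1, g2):
        result = dict(g1)
        for feat, sub2 in g2.items():
            if feat == 'SORT' and 'SORT' in result:
                if result['SORT'] != sub2:
                    return None            # sort clash: unification fails
            elif feat in result:
                sub = unify(result[feat], sub2)
                if sub is None:
                    return None
                result[feat] = sub
            else:
                result[feat] = sub2
        return result

    print(unify({'SORT': 'verb', 'AGR': {'NUM': {'SORT': 'sg'}}},
                {'AGR': {'PER': {'SORT': '3'}}}))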
arxiv-668351 | cmp-lg/9406020 | DPOCL: A Principled Approach to Discourse Planning | <|reference_start|>DPOCL: A Principled Approach to Discourse Planning: Research in discourse processing has identified two representational requirements for discourse planning systems. First, discourse plans must adequately represent the intentional structure of the utterances they produce in order to enable a computational discourse agent to respond effectively to communicative failures (Moore and Paris). Second, discourse plans must represent the informational structure of utterances. In addition to these representational requirements, we argue that discourse planners should be formally characterizable in terms of soundness and completeness.<|reference_end|> | arxiv | @article{young1994dpocl:,
title={DPOCL: A Principled Approach to Discourse Planning},
author={R. Michael Young (Intelligent Systems Program, University of
Pittsburgh) and Johanna D. Moore (Department of Computer Science and Learning
Research and Development Center, University of Pittsburgh)},
journal={Proceedings of the Seventh International Workshop on Natural
Language Generation, Kennebunkport, ME, June 1994},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406020},
primaryClass={cmp-lg cs.CL}
} | young1994dpocl: |
arxiv-668352 | cmp-lg/9406021 | A symbolic description of punning riddles and its computer implementation | <|reference_start|>A symbolic description of punning riddles and its computer implementation: Riddles based on simple puns can be classified according to the patterns of word, syllable or phrase similarity they depend upon. We have devised a formal model of the semantic and syntactic regularities underlying some of the simpler types of punning riddle. We have also implemented this preliminary theory in a computer program which can generate riddles from a lexicon containing general data about words and phrases; that is, the lexicon content is not customised to produce jokes. Informal evaluation of the program's results by a set of human judges suggests that the riddles produced by this program are of comparable quality to those in general circulation among school children.<|reference_end|> | arxiv | @article{binsted1994a,
title={A symbolic description of punning riddles and its computer
implementation},
author={Kim Binsted (Department of Artificial Intelligence, University of
Edinburgh), Graeme Ritchie (Department of Artificial Intelligence, University
of Edinburgh)},
journal={arXiv preprint arXiv:cmp-lg/9406021},
year={1994},
number={688 (Dept of AI). Submitted to "Humor"},
archivePrefix={arXiv},
eprint={cmp-lg/9406021},
primaryClass={cmp-lg cs.CL}
} | binsted1994a |
arxiv-668353 | cmp-lg/9406022 | An implemented model of punning riddles | <|reference_start|>An implemented model of punning riddles: In this paper, we discuss a model of simple question-answer punning, implemented in a program, JAPE, which generates riddles from humour-independent lexical entries. The model uses two main types of structure: schemata, which determine the relationships between key words in a joke, and templates, which produce the surface form of the joke. JAPE succeeds in generating pieces of text that are recognizably jokes, but some of them are not very good jokes. We mention some potential improvements and extensions, including post-production heuristics for ordering the jokes according to quality.<|reference_end|> | arxiv | @article{binsted1994an,
title={An implemented model of punning riddles},
author={Kim Binsted (Department of Artificial Intelligence, University of
Edinburgh), Graeme Ritchie (Department of Artificial Intelligence, University
of Edinburgh)},
journal={In proceedings of AAAI-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406022},
primaryClass={cmp-lg cs.CL}
} | binsted1994an |
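The schema/template division can be made concrete with a toy example. Everything in this sketch (the lexicon entries, the property list, and the single template) is invented for illustration; JAPE's schemata and lexicon are considerably richer.

    # Toy schema/template riddle generator over an invented mini-lexicon.
    lexicon = {'hare': {'homophone': 'hair', 'is_a': 'rabbit'}}
    properties = {'hair': 'you can comb'}     # salient property per word

    def punning_riddle(word):
        entry = lexicon.get(word, {})
        homophone = entry.get('homophone')
        if homophone in properties:
            # Schema: links word, homophone, property; template: surface form.
            return "What do you call a %s %s? A %s." % (
                entry['is_a'], properties[homophone], word)

    print(punning_riddle('hare'))  # What do you call a rabbit you can comb? A hare.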
arxiv-668354 | cmp-lg/9406023 | A Spanish Tagset for the CRATER Project | <|reference_start|>A Spanish Tagset for the CRATER Project: This working paper describes the Spanish tagset to be used in the context of CRATER, a CEC-funded project aiming at the creation of a multilingual (English, French, Spanish) aligned corpus using the International Telecommunications Union corpus. In this respect, each version of the corpus will be (or is currently) tagged. The Xerox PARC tagger will be adapted to Spanish in order to perform the tagging of the Spanish version. This tagset has been devised as the ideal one for Spanish, and has been posted to several lists in order to get feedback on it.<|reference_end|> | arxiv | @article{león1994a,
title={A Spanish Tagset for the CRATER Project},
author={Fernando Sánchez León (Laboratorio de Lingüística
Informática, Facultad de Filosofía y Letras, Universidad Autónoma de
Madrid)},
journal={arXiv preprint arXiv:cmp-lg/9406023},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406023},
primaryClass={cmp-lg cs.CL}
} | león1994a |
arxiv-668355 | cmp-lg/9406024 | Learning Fault-tolerant Speech Parsing with SCREEN | <|reference_start|>Learning Fault-tolerant Speech Parsing with SCREEN: This paper describes a new approach and a system SCREEN for fault-tolerant speech parsing. SCREEN stands for Symbolic Connectionist Robust EnterprisE for Natural language. Speech parsing describes the syntactic and semantic analysis of spontaneous spoken language. The general approach is based on incremental immediate flat analysis, learning of syntactic and semantic speech parsing, parallel integration of current hypotheses, and the consideration of various forms of speech-related errors. The goal for this approach is to explore the parallel interactions between various knowledge sources for learning incremental fault-tolerant speech parsing. This approach is examined in a system SCREEN using various hybrid connectionist techniques. Hybrid connectionist techniques are examined because of their promising properties of inherent fault tolerance, learning, gradedness and parallel constraint integration. The input for SCREEN is hypotheses about recognized words of a spoken utterance potentially analyzed by a speech system; the output is hypotheses about the flat syntactic and semantic analysis of the utterance. In this paper we focus on the general approach, the overall architecture, and examples for learning flat syntactic speech parsing. Unlike most other speech-language architectures, SCREEN emphasizes an interactive rather than an autonomous position, learning rather than encoding, flat analysis rather than in-depth analysis, and fault-tolerant processing of phonetic, syntactic and semantic knowledge.<|reference_end|> | arxiv | @article{wermter1994learning,
title={Learning Fault-tolerant Speech Parsing with SCREEN},
author={Stefan Wermter (Dept. of Computer Science, University of Hamburg, FRG)
and Volker Weber (Dept. of Computer Science, University of Hamburg, FRG)},
journal={arXiv preprint arXiv:cmp-lg/9406024},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406024},
primaryClass={cmp-lg cs.CL}
} | wermter1994learning |
arxiv-668356 | cmp-lg/9406025 | Emergent Parsing and Generation with Generalized Chart | <|reference_start|>Emergent Parsing and Generation with Generalized Chart: A new, flexible inference method for Horn logic programs is proposed, a drastic generalization of chart parsing in which partial instantiations of clauses in a program roughly correspond to arcs in a chart. Chart-like parsing and semantic-head-driven generation emerge from this method. With a parsimonious instantiation scheme for ambiguity packing, the parsing complexity reduces to that of standard chart-based algorithms.<|reference_end|> | arxiv | @article{koiti1994emergent,
title={Emergent Parsing and Generation with Generalized Chart},
author={HASIDA Koiti (Electrotechnical Laboratory)},
journal={arXiv preprint arXiv:cmp-lg/9406025},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406025},
primaryClass={cmp-lg cs.CL}
} | koiti1994emergent |
arxiv-668357 | cmp-lg/9406026 | The Very Idea of Dynamic Semantics | <|reference_start|>The Very Idea of Dynamic Semantics: "Natural languages are programming languages for minds." Can we or should we take this slogan seriously? If so, how? Can answers be found by looking at the various "dynamic" treatments of natural language developed over the last decade or so, mostly in response to problems associated with donkey anaphora? In Dynamic Logic of Programs, the meaning of a program is a binary relation on the set of states of some abstract machine. This relation is meant to model aspects of the effects of the execution of the program, in particular its input-output behavior. What, if anything, are the dynamic aspects of various proposed dynamic semantics for natural languages supposed to model? Is there anything dynamic to be modeled? If not, what is all the fuss about? We shall try to answer some, at least, of these questions and provide materials for answers to others.<|reference_end|> | arxiv | @article{israel1994the,
title={The Very Idea of Dynamic Semantics},
author={David Israel (Artificial Intelligence Center, SRI)},
journal={Proc. Ninth Amsterdam Colloquium, 1993},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406026},
primaryClass={cmp-lg cs.CL}
} | israel1994the |
arxiv-668358 | cmp-lg/9406027 | Analyzing and Improving Statistical Language Models for Speech Recognition | <|reference_start|>Analyzing and Improving Statistical Language Models for Speech Recognition: In many current speech recognizers, a statistical language model is used to indicate how likely it is that a certain word will be spoken next, given the words recognized so far. How can statistical language models be improved so that more complex speech recognition tasks can be tackled? Since the knowledge of the weaknesses of any theory often makes improving the theory easier, the central idea of this thesis is to analyze the weaknesses of existing statistical language models in order to subsequently improve them. To that end, we formally define a weakness of a statistical language model in terms of the logarithm of the total probability, LTP, a term closely related to the standard perplexity measure used to evaluate statistical language models. We apply our definition of a weakness to a frequently used statistical language model, called a bi-pos model. This results, for example, in a new modeling of unknown words which improves the performance of the model by 14% to 21%. Moreover, one of the identified weaknesses has prompted the development of our generalized N-pos language model, which is also outlined in this thesis. It can incorporate linguistic knowledge even if it extends over many words, which is not feasible in a traditional N-pos model. This leads to a discussion of what knowledge should be added to statistical language models in general, and we give criteria for selecting potentially useful knowledge. These results show the usefulness both of our definition of a weakness and of performing an analysis of weaknesses of statistical language models in general.<|reference_end|> | arxiv | @article{ueberla1994analyzing,
title={Analyzing and Improving Statistical Language Models for Speech
Recognition},
author={Joerg P. Ueberla},
journal={arXiv preprint arXiv:cmp-lg/9406027},
year={1994},
number={SFU-CMPT 94-05-02},
archivePrefix={arXiv},
eprint={cmp-lg/9406027},
primaryClass={cmp-lg cs.CL}
} | ueberla1994analyzing |
arxiv-668359 | cmp-lg/9406028 | Resolution of Syntactic Ambiguity: the Case of New Subjects | <|reference_start|>Resolution of Syntactic Ambiguity: the Case of New Subjects: I review evidence for the claim that syntactic ambiguities are resolved on the basis of the meaning of the competing analyses, not their structure. I identify a collection of ambiguities that do not yet have a meaning-based account and propose one which is based on the interaction of discourse and grammatical function. I provide evidence for my proposal by examining statistical properties of the Penn Treebank of syntactically annotated text.<|reference_end|> | arxiv | @article{niv1994resolution,
title={Resolution of Syntactic Ambiguity: the Case of New Subjects},
author={Michael Niv (University of Pennsylvania)},
journal={COGSCI-93},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406028},
primaryClass={cmp-lg cs.CL}
} | niv1994resolution |
arxiv-668360 | cmp-lg/9406029 | A Computational Model of Syntactic Processing: Ambiguity Resolution from Interpretation | <|reference_start|>A Computational Model of Syntactic Processing: Ambiguity Resolution from Interpretation: Syntactic ambiguity abounds in natural language, yet humans have no difficulty coping with it. In fact, the process of ambiguity resolution is almost always unconscious. It is not infallible, however, as example 1 demonstrates. 1. The horse raced past the barn fell. This sentence is perfectly grammatical, as is evident when it appears in the following context: 2. Two horses were being shown off to a prospective buyer. One was raced past a meadow, and the other was raced past a barn. ... Grammatical yet unprocessable sentences such as 1 are called `garden-path sentences.' Their existence provides an opportunity to investigate the human sentence processing mechanism by studying how and when it fails. The aim of this thesis is to construct a computational model of language understanding which can predict processing difficulty. The data to be modeled are known examples of garden path and non-garden path sentences, and other results from psycholinguistics. It is widely believed that there are two distinct loci of computation in sentence processing: syntactic parsing and semantic interpretation. One longstanding controversy is which of these two modules bears responsibility for the immediate resolution of ambiguity. My claim is that it is the latter, and that the syntactic processing module is a very simple device which blindly and faithfully constructs all possible analyses for the sentence up to the current point of processing. The interpretive module serves as a filter, occasionally discarding certain of these analyses which it deems less appropriate for the ongoing discourse than their competitors. This document is divided into three parts. The first is introductory, and reviews a selection of proposals from the sentence processing literature. The second part explores a body of data which has been adduced in support of a theory of structural preferences --- one that is inconsistent with the present claim. I show how the current proposal can be specified to account for the available data, and moreover to predict where structural preference theories will go wrong. The third part is a theoretical investigation of how well the proposed architecture can be realized using current conceptions of linguistic competence. In it, I present a parsing algorithm and a meaning-based ambiguity resolution method.<|reference_end|> | arxiv | @article{niv1994a,
title={A Computational Model of Syntactic Processing: Ambiguity Resolution from
Interpretation},
author={Michael Niv (University of Pennsylvania)},
journal={Dissertation, Computer and Information Science Dept. 1993},
year={1994},
number={IRCS-93-27},
archivePrefix={arXiv},
eprint={cmp-lg/9406029},
primaryClass={cmp-lg cs.CL}
} | niv1994a |
arxiv-668361 | cmp-lg/9406030 | The complexity of normal form rewrite sequences for Associativity | <|reference_start|>The complexity of normal form rewrite sequences for Associativity: The complexity of a particular term-rewrite system is considered: the rule of associativity (x*y)*z --> x*(y*z). Algorithms and exact calculations are given for the longest and shortest sequences of applications of --> that result in normal form (NF). The shortest NF sequence for a term x is always n-drm(x), where n is the number of occurrences of * in x and drm(x) is the depth of the rightmost leaf of x. The longest NF sequence for any term is of length n(n-1)/2.<|reference_end|> | arxiv | @article{niv1994the,
title={The complexity of normal form rewrite sequences for Associativity},
author={Michael Niv (Technion)},
journal={arXiv preprint arXiv:cmp-lg/9406030},
year={1994},
number={Computer Science Department, LCL 94-6},
archivePrefix={arXiv},
eprint={cmp-lg/9406030},
primaryClass={cmp-lg cs.CL}
} | niv1994the |
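Both quantities are easy to check by computation. In the sketch below, terms are encoded as nested pairs (our encoding, chosen for illustration): stars counts occurrences of *, drm is the depth of the rightmost leaf, shortest_nf implements the paper's n - drm(x) formula, and n(n-1)/2 is the worst case over all terms with n stars.

    def stars(t):                  # n: number of * occurrences
        return 0 if isinstance(t, str) else 1 + stars(t[0]) + stars(t[1])

    def drm(t):                    # depth of the rightmost leaf
        return 0 if isinstance(t, str) else 1 + drm(t[1])

    def shortest_nf(t):
        return stars(t) - drm(t)

    def longest_nf_bound(t):
        n = stars(t)
        return n * (n - 1) // 2    # maximum over all terms with n stars

    t = ((('a', 'b'), 'c'), 'd')   # ((a*b)*c)*d
    print(shortest_nf(t), longest_nf_bound(t))   # 2 3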
arxiv-668362 | cmp-lg/9406031 | A Psycholinguistically Motivated Parser for CCG | <|reference_start|>A Psycholinguistically Motivated Parser for CCG: Considering the speed in which humans resolve syntactic ambiguity, and the overwhelming evidence that syntactic ambiguity is resolved through selection of the analysis whose interpretation is the most `sensible', one comes to the conclusion that interpretation, hence parsing take place incrementally, just about every word. Considerations of parsimony in the theory of the syntactic processor lead one to explore the simplest of parsers: one which represents only analyses as defined by the grammar and no other information. Toward this aim of a simple, incremental parser I explore the proposal that the competence grammar is a Combinatory Categorial Grammar (CCG). I address the problem of the proliferating analyses that stem from CCG's associativity of derivation. My solution involves maintaining only the maximally incremental analysis and, when necessary, computing the maximally right-branching analysis. I use results from the study of rewrite systems to show that this computation is efficient.<|reference_end|> | arxiv | @article{niv1994a,
title={A Psycholinguistically Motivated Parser for CCG},
author={Michael Niv (Technion)},
journal={ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406031},
primaryClass={cmp-lg cs.CL}
} | niv1994a |
arxiv-668363 | cmp-lg/9406032 | Anytime Algorithms for Speech Parsing? | <|reference_start|>Anytime Algorithms for Speech Parsing?: This paper discusses to what extent the concept of ``anytime algorithms'' can be applied to parsing algorithms with feature unification. We first try to give a more precise definition of what an anytime algorithm is. We argue that parsing algorithms have to be classified as contract algorithms as opposed to (truly) interruptible algorithms. With the restriction that the transaction being active at the time an interrupt is issued has to be completed before the interrupt can be executed, it is possible to provide a parser with limited anytime behavior, which is in fact being realized in our research prototype.<|reference_end|> | arxiv | @article{goerz1994anytime,
title={Anytime Algorithms for Speech Parsing?},
author={Guenther Goerz (University of Erlangen-Nuernberg, IMMD VIII) and
Marcus Kesseler (University of Erlangen-Nuernberg, IMMD VIII)},
journal={COLING-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406032},
primaryClass={cmp-lg cs.CL}
} | goerz1994anytime |
arxiv-668364 | cmp-lg/9406033 | Verb Semantics and Lexical Selection | <|reference_start|>Verb Semantics and Lexical Selection: This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection.<|reference_end|> | arxiv | @article{wu1994verb,
title={Verb Semantics and Lexical Selection},
author={Zhibiao Wu (National University of Singapore) and Martha Palmer
(University of Pennsylvania)},
journal={Proceedings of ACL 94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406033},
primaryClass={cmp-lg cs.CL}
} | wu1994verb |
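The similarity measure proposed in this paper (now widely known as Wu-Palmer similarity) scores two concepts by the depth of their most specific common superconcept: ConSim = 2*N3 / (N1 + N2 + 2*N3), where N1 and N2 are the distances from each concept to that superconcept and N3 is its depth. A minimal sketch over an invented toy taxonomy:

    # Wu-Palmer concept similarity over child -> parent is-a links.
    parent = {'dog': 'canine', 'canine': 'animal', 'cat': 'feline',
              'feline': 'animal', 'animal': 'entity'}

    def ancestors(c):
        chain = [c]
        while c in parent:
            c = parent[c]
            chain.append(c)
        return chain                               # c, ..., root

    def consim(c1, c2):
        a1, a2 = ancestors(c1), ancestors(c2)
        common = next(a for a in a1 if a in a2)    # most specific shared node
        n1, n2 = a1.index(common), a2.index(common)
        n3 = len(ancestors(common)) - 1            # depth of the common node
        return 2.0 * n3 / (n1 + n2 + 2.0 * n3)

    print(consim('dog', 'cat'))    # 2*1 / (2 + 2 + 2*1) = 0.333...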
arxiv-668365 | cmp-lg/9406034 | Decision Lists for Lexical Ambiguity Resolution: Application to Accent Restoration in Spanish and French | <|reference_start|>Decision Lists for Lexical Ambiguity Resolution: Application to Accent Restoration in Spanish and French: This paper presents a statistical decision procedure for lexical ambiguity resolution. The algorithm exploits both local syntactic patterns and more distant collocational evidence, generating an efficient, effective, and highly perspicuous recipe for resolving a given ambiguity. By identifying and utilizing only the single best disambiguating evidence in a target context, the algorithm avoids the problematic complex modeling of statistical dependencies. Although directly applicable to a wide class of ambiguities, the algorithm is described and evaluated in a realistic case study, the problem of restoring missing accents in Spanish and French text.<|reference_end|> | arxiv | @article{yarowsky1994decision,
title={Decision Lists for Lexical Ambiguity Resolution: Application to Accent
Restoration in Spanish and French},
author={David Yarowsky (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9406034},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406034},
primaryClass={cmp-lg cs.CL}
} | yarowsky1994decision |
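The core procedure is simple enough to sketch: collect pieces of evidence, rank each by the magnitude of its log-likelihood ratio, and classify with the single strongest piece that matches the target context. The smoothing constant and flat feature handling below are illustrative simplifications.

    import math

    def train_decision_list(examples):
        # examples: (feature_set, label) pairs with labels in {0, 1}
        counts = {}
        for feats, label in examples:
            for f in feats:
                c = counts.setdefault(f, [0.1, 0.1])   # tiny smoothing
                c[label] += 1
        rules = []
        for f, (c0, c1) in counts.items():
            llr = math.log(c1 / c0)
            rules.append((abs(llr), f, 1 if llr > 0 else 0))
        rules.sort(reverse=True)       # strongest single evidence first
        return rules

    def classify(rules, feats, default=0):
        for _, f, label in rules:
            if f in feats:             # use only the single best match
                return label
        return default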
arxiv-668366 | cmp-lg/9406035 | DISCO---An HPSG-based NLP System and its Application for Appointment Scheduling (Project Note) | <|reference_start|>DISCO---An HPSG-based NLP System and its Application for Appointment Scheduling (Project Note): The natural language system DISCO is described. It combines (i) a powerful and flexible grammar development system; (ii) linguistic competence for German including morphology, syntax and semantics; (iii) new methods for linguistic performance modelling on the basis of high-level competence grammars; (iv) new methods for modelling multi-agent dialogue competence; and (v) an interesting sample application for appointment scheduling and calendar management.<|reference_end|> | arxiv | @article{uszkoreit1994disco---an,
title={DISCO---An HPSG-based NLP System and its Application for Appointment
Scheduling (Project Note)},
author={Hans Uszkoreit, Rolf Backofen, Stephan Busemann, Abdel Kader Diagne,
Elizabeth A. Hinkelman, Walter Kasper, Bernd Kiefer, Hans-Ulrich Krieger,
Klaus Netter, Guenter Neumann, Stephan Oepen, Stephen P. Spackman},
journal={arXiv preprint arXiv:cmp-lg/9406035},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406035},
primaryClass={cmp-lg cs.CL}
} | uszkoreit1994disco---an |
arxiv-668367 | cmp-lg/9406036 | Text Analysis Tools in Spoken Language Processing | <|reference_start|>Text Analysis Tools in Spoken Language Processing: This submission contains the postscript of the final version of the slides used in our ACL-94 tutorial.<|reference_end|> | arxiv | @article{riley1994text,
title={Text Analysis Tools in Spoken Language Processing},
author={Michael Riley, Richard Sproat (Linguistics Research Department, AT&T
Bell Laboratories, Murray Hill, NJ)},
journal={arXiv preprint arXiv:cmp-lg/9406036},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406036},
primaryClass={cmp-lg cs.CL}
} | riley1994text |
arxiv-668368 | cmp-lg/9406037 | Multi-Paragraph Segmentation of Expository Text | <|reference_start|>Multi-Paragraph Segmentation of Expository Text: This paper describes TextTiling, an algorithm for partitioning expository texts into coherent multi-paragraph discourse units which reflect the subtopic structure of the texts. The algorithm uses domain-independent lexical frequency and distribution information to recognize the interactions of multiple simultaneous themes. Two fully-implemented versions of the algorithm are described and shown to produce segmentation that corresponds well to human judgments of the major subtopic boundaries of thirteen lengthy texts.<|reference_end|> | arxiv | @article{hearst1994multi-paragraph,
title={Multi-Paragraph Segmentation of Expository Text},
author={Marti A. Hearst (UC Berkeley and Xerox PARC)},
journal={arXiv preprint arXiv:cmp-lg/9406037},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406037},
primaryClass={cmp-lg cs.CL}
} | hearst1994multi-paragraph |
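The block-comparison variant can be sketched briefly: measure the lexical similarity of adjacent token windows at every gap and place boundaries at valleys of the resulting curve. The window size and the bare local-minimum criterion here are illustrative choices; the implemented versions add smoothing and boundary-depth scoring.

    import math

    def bag(seq):
        d = {}
        for w in seq:
            d[w] = d.get(w, 0) + 1
        return d

    def cosine(a, b):
        num = sum(a[w] * b.get(w, 0) for w in a)
        den = (math.sqrt(sum(v * v for v in a.values())) *
               math.sqrt(sum(v * v for v in b.values())))
        return num / den if den else 0.0

    def tile(tokens, k=20):
        gaps = list(range(k, len(tokens) - k))
        sims = [cosine(bag(tokens[g - k:g]), bag(tokens[g:g + k]))
                for g in gaps]
        # Boundaries at local minima of the gap-similarity curve.
        return [gaps[i] for i in range(1, len(sims) - 1)
                if sims[i] < sims[i - 1] and sims[i] < sims[i + 1]]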
arxiv-668369 | cmp-lg/9406038 | An Empirical Model of Acknowledgment for Spoken-Language Systems | <|reference_start|>An Empirical Model of Acknowledgment for Spoken-Language Systems: We refine and extend prior views of the description, purposes, and contexts-of-use of acknowledgment acts through empirical examination of the use of acknowledgments in task-based conversation. We distinguish three broad classes of acknowledgments (other-->ackn, self-->other-->ackn, and self+ackn) and present a catalogue of 13 patterns within these classes that account for the specific uses of acknowledgment in the corpus.<|reference_end|> | arxiv | @article{novick1994an,
title={An Empirical Model of Acknowledgment for Spoken-Language Systems},
author={David G. Novick (Oregon Graduate Institute), Stephen Sutton (Oregon
Graduate Institute)},
journal={In Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406038},
primaryClass={cmp-lg cs.CL}
} | novick1994an |
arxiv-668370 | cmp-lg/9406039 | Three studies of grammar-based surface-syntactic parsing of unrestricted English text. A summary and orientation | <|reference_start|>Three studies of grammar-based surface-syntactic parsing of unrestricted English text. A summary and orientation: The dissertation addresses the design of parsing grammars for automatic surface-syntactic analysis of unconstrained English text. It consists of a summary and three articles. "Morphological disambiguation" documents a grammar for morphological (or part-of-speech) disambiguation of English, done within the Constraint Grammar framework proposed by Fred Karlsson. The disambiguator seeks to discard those of the alternative morphological analyses proposed by the lexical analyser that are contextually illegitimate. The 1,100 constraints express some 23 general, essentially syntactic statements as restrictions on the linear order of morphological tags. The error rate of the morphological disambiguator is about ten times smaller than that of another state-of-the-art probabilistic disambiguator, given that both are allowed to leave some of the hardest ambiguities unresolved. This accuracy suggests the viability of the grammar-based approach to natural language parsing, thus also contributing to the more general debate concerning the viability of probabilistic vs. linguistic techniques. "Experiments with heuristics" addresses the question of how to resolve those ambiguities that survive the morphological disambiguator. Two approaches are presented and empirically evaluated: (i) heuristic disambiguation constraints and (ii) techniques for learning from the fully disambiguated part of the corpus and then applying this information to resolving remaining ambiguities.<|reference_end|> | arxiv | @article{voutilainen1994three,
title={Three studies of grammar-based surface-syntactic parsing of unrestricted
English text. A summary and orientation},
author={Atro Voutilainen (Research Unit for Computational Linguistics,
University of Helsinki)},
journal={arXiv preprint arXiv:cmp-lg/9406039},
year={1994},
number={Publications 24, Department of General Linguistics, University of
Helsinki, 1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406039},
primaryClass={cmp-lg cs.CL}
} | voutilainen1994three |
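The constraint-application regime that the first article relies on is easy to sketch: every word starts with all lexically possible tags, and constraints discard contextually illegitimate readings without ever removing the last one. The two constraints below are invented stand-ins for the grammar's 1,100 rules.

    # Constraint Grammar style disambiguation over candidate tag sets.
    def disambiguate(cands, constraints):
        cands = [set(tags) for tags in cands]
        changed = True
        while changed:
            changed = False
            for i, tags in enumerate(cands):
                for tag in list(tags):
                    if len(tags) > 1 and any(c(tag, i, cands)
                                             for c in constraints):
                        tags.discard(tag)   # contextually illegitimate
                        changed = True
        return cands

    # Invented example constraints on the linear order of tags:
    constraints = [
        lambda t, i, c: t == 'V' and i > 0 and c[i - 1] == {'DET'},
        lambda t, i, c: t == 'DET' and i == len(c) - 1,
    ]
    print(disambiguate([{'DET'}, {'N', 'V'}], constraints))  # [{'DET'}, {'N'}]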
arxiv-668371 | cmp-lg/9406040 | Learning unification-based grammars using the Spoken English Corpus | <|reference_start|>Learning unification-based grammars using the Spoken English Corpus: This paper describes a grammar learning system that combines model-based and data-driven learning within a single framework. Our results from learning grammars using the Spoken English Corpus (SEC) suggest that combined model-based and data-driven learning can produce a more plausible grammar than is the case when using either learning style in isolation.<|reference_end|> | arxiv | @article{osborne1994learning,
title={Learning unification-based grammars using the Spoken English Corpus},
author={Miles Osborne (Dept. of Computer Science, University of York, York YO1
5DD, England) and Derek Bridge (Dept. of Computer Science, University of
York, York, YO1 5DD, England)},
journal={ICGI-94 Colloquium},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9406040},
primaryClass={cmp-lg cs.CL}
} | osborne1994learning |
arxiv-668372 | cmp-lg/9407001 | Morphology with a Null-Interface | <|reference_start|>Morphology with a Null-Interface: We present an integrated architecture for word-level and sentence-level processing in a unification-based paradigm. The core of the system is a CLP implementation of a unification engine for feature structures supporting relational values. In this framework an HPSG-style grammar is implemented. Word-level processing uses X2MorF, a morphological component based on an extended version of two-level morphology. This component is tightly integrated with the grammar as a relation. The advantage of this approach is that morphology and syntax are kept logically autonomous while at the same time minimizing interface problems.<|reference_end|> | arxiv | @article{trost1994morphology,
title={Morphology with a Null-Interface},
author={Harald Trost (Austrian Research Institute for Artificial
Intelligence), Johannes Matiasek (Austrian Research Institute for Artificial
Intelligence)},
journal={Proceedings of the 15th International Conference on Computational
Linguistics (COLING 94), Kyoto, Japan, August 1994, pp. 141-147},
year={1994},
number={OeFAI-TR-94-02},
archivePrefix={arXiv},
eprint={cmp-lg/9407001},
primaryClass={cmp-lg cs.CL}
} | trost1994morphology |
arxiv-668373 | cmp-lg/9407002 | Syntactic Analysis by Local Grammars Automata: an Efficient Algorithm | <|reference_start|>Syntactic Analysis by Local Grammars Automata: an Efficient Algorithm: Local grammars can be represented in a very convenient way by automata. This paper describes and illustrates an efficient algorithm for applying local grammars in this form to lemmatized texts.<|reference_end|> | arxiv | @article{mohri1994syntactic,
title={Syntactic Analysis by Local Grammars Automata: an Efficient Algorithm},
author={Mehryar Mohri (IGM-LADL, Paris, FRANCE)},
journal={arXiv preprint arXiv:cmp-lg/9407002},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407002},
primaryClass={cmp-lg cs.CL}
} | mohri1994syntactic |
arxiv-668374 | cmp-lg/9407003 | Compact Representations by Finite-State Transducers | <|reference_start|>Compact Representations by Finite-State Transducers: Finite-state transducers give efficient representations of many Natural Language phenomena. They make it possible to account for complex lexical restrictions without resorting to a large set of complex rules that are difficult to analyze. We show here that these representations can be made very compact, indicate how to perform the corresponding minimization, and point out interesting linguistic side-effects of this operation.<|reference_end|> | arxiv | @article{mohri1994compact,
title={Compact Representations by Finite-State Transducers},
author={Mehryar Mohri (IGM-Ladl, Paris, France)},
journal={arXiv preprint arXiv:cmp-lg/9407003},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407003},
primaryClass={cmp-lg cs.CL}
} | mohri1994compact |
arxiv-668375 | cmp-lg/9407004 | Japanese word sense disambiguation based on examples of synonyms | <|reference_start|>Japanese word sense disambiguation based on examples of synonyms: (This is not the abstract): The language is Japanese. If your printer does not have fonts for Japanese characters, the characters in figures will not be printed out correctly. Dissertation for Bachelor's degree at Kyoto University (Nagao lab.), March 1994.<|reference_end|> | arxiv | @article{matsumoto1994japanese,
title={Japanese word sense disambiguation based on examples of synonyms},
author={Mitsutaka Matsumoto (Kyoto University)},
journal={arXiv preprint arXiv:cmp-lg/9407004},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407004},
primaryClass={cmp-lg cs.CL}
} | matsumoto1994japanese |
arxiv-668376 | cmp-lg/9407005 | A Corrective Training Algorithm for Adaptive Learning in Bag Generation | <|reference_start|>A Corrective Training Algorithm for Adaptive Learning in Bag Generation: The sampling problem in a training corpus is one of the major sources of errors in corpus-based applications. This paper proposes a corrective training algorithm to best fit the run-time context domain in the application of bag generation. It shows which objects should be adjusted and how to adjust their probabilities. The resulting techniques are greatly simplified, and the experimental results demonstrate the promising effects of the training algorithm from a generic domain to a specific domain. In general, these techniques can be easily extended to various language models and corpus-based applications.<|reference_end|> | arxiv | @article{chen1994a,
title={A Corrective Training Algorithm for Adaptive Learning in Bag Generation},
author={Hsin-Hsi Chen (National Taiwan University) and Yue-Shi Lee (National
Taiwan University)},
journal={Proceedings of NeMLaP-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407005},
primaryClass={cmp-lg cs.CL}
} | chen1994a |
arxiv-668377 | cmp-lg/9407006 | Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser | <|reference_start|>Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser: We describe an efficient bottom-up parser that interleaves syntactic and semantic structure building. Two techniques are presented for reducing search by reducing local ambiguity: Limited left-context constraints are used to reduce local syntactic ambiguity, and deferred sortal-constraint application is used to reduce local semantic ambiguity. We experimentally evaluate these techniques, and show dramatic reductions in both number of chart-edges and total parsing time. The robust processing capabilities of the parser are demonstrated in its use in improving the accuracy of a speech recognizer.<|reference_end|> | arxiv | @article{dowding1994interleaving,
title={Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser},
author={John Dowding, Robert Moore, Francois Andry, and Douglas Moran},
journal={32nd ACL, Las Cruces, New Mexico, June 1994, pp. 110-116},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407006},
primaryClass={cmp-lg cs.CL}
} | dowding1994interleaving |
arxiv-668378 | cmp-lg/9407007 | GEMINI: A Natural Language System for Spoken-Language Understanding | <|reference_start|>GEMINI: A Natural Language System for Spoken-Language Understanding: Gemini is a natural language understanding system developed for spoken language applications. The paper describes the architecture of Gemini, paying particular attention to resolving the tension between robustness and overgeneration. Gemini features a broad-coverage unification-based grammar of English, fully interleaved syntactic and semantic processing in an all-paths, bottom-up parser, and an utterance-level parser to find interpretations of sentences that might not be analyzable as complete sentences. Gemini also includes novel components for recognizing and correcting grammatical disfluencies, and for doing parse preferences. This paper presents a component-by-component view of Gemini, providing detailed relevant measurements of size, efficiency, and performance.<|reference_end|> | arxiv | @article{dowding1994gemini:,
title={GEMINI: A Natural Language System for Spoken-Language Understanding},
author={John Dowding, Jean Mark Gawron, Doug Appelt, John Bear, Lynn Cherny,
Robert Moore, and Douglas Moran},
journal={appeared in 31st ACL, Columbus, Ohio, June 1993, pp. 54-61},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407007},
primaryClass={cmp-lg cs.CL}
} | dowding1994gemini: |
arxiv-668379 | cmp-lg/9407008 | Tricolor DAGs for Machine Translation | <|reference_start|>Tricolor DAGs for Machine Translation: Machine translation (MT) has recently been formulated in terms of constraint-based knowledge representation and unification theories, but it is becoming more and more evident that it is not possible to design a practical MT system without an adequate method of handling mismatches between semantic representations in the source and target languages. In this paper, we introduce the idea of ``information-based'' MT, which is considerably more flexible than interlingual MT or the conventional transfer-based MT.<|reference_end|> | arxiv | @article{takeda1994tricolor,
title={Tricolor DAGs for Machine Translation},
author={Koichi Takeda (IBM Research, Tokyo Research Lab.)},
journal={In Proceedings of ACL-94},
year={1994},
number={TRL-SPI-4030},
archivePrefix={arXiv},
eprint={cmp-lg/9407008},
primaryClass={cmp-lg cs.CL}
} | takeda1994tricolor |
arxiv-668380 | cmp-lg/9407009 | Estimating Performance of Pipelined Spoken Language Translation Systems | <|reference_start|>Estimating Performance of Pipelined Spoken Language Translation Systems: Most spoken language translation systems developed to date rely on a pipelined architecture, in which the main stages are speech recognition, linguistic analysis, transfer, generation and speech synthesis. When making projections of error rates for systems of this kind, it is natural to assume that the error rates for the individual components are independent, making the system accuracy the product of the component accuracies. The paper reports experiments carried out using the SRI-SICS-Telia Research Spoken Language Translator and a 1000-utterance sample of unseen data. The results suggest that the naive performance model leads to serious overestimates of system error rates, since there are in fact strong dependencies between the components. Predicting the system error rate on the independence assumption by simple multiplication resulted in a 16% proportional overestimate for all utterances, and a 19% overestimate when only utterances of length 1-10 words were considered.<|reference_end|> | arxiv | @article{rayner1994estimating,
title={Estimating Performance of Pipelined Spoken Language Translation Systems},
author={Manny Rayner (SRI International), David Carter (SRI International),
Patti Price (SRI International), and Bertil Lyberg (Telia Research AB)},
journal={arXiv preprint arXiv:cmp-lg/9407009},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407009},
primaryClass={cmp-lg cs.CL}
} | rayner1994estimating |
arxiv-668381 | cmp-lg/9407010 | Combining Knowledge Sources to Reorder N-Best Speech Hypothesis Lists | <|reference_start|>Combining Knowledge Sources to Reorder N-Best Speech Hypothesis Lists: A simple and general method is described that can combine different knowledge sources to reorder N-best lists of hypotheses produced by a speech recognizer. The method is automatically trainable, acquiring information from both positive and negative examples. Experiments are described in which it was tested on a 1000-utterance sample of unseen ATIS data.<|reference_end|> | arxiv | @article{rayner1994combining,
title={Combining Knowledge Sources to Reorder N-Best Speech Hypothesis Lists},
author={Manny Rayner (SRI International), David Carter (SRI International),
Vassilios Digalakis (SRI International) and Patti Price (SRI International)},
journal={arXiv preprint arXiv:cmp-lg/9407010},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407010},
primaryClass={cmp-lg cs.CL}
} | rayner1994combining |
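In outline, the method scores every hypothesis with several knowledge sources and reorders the list by a linear combination of those scores. The sketch below fixes the weights by hand and uses two invented scorers; the paper's contribution is training the weights automatically from positive and negative examples.

    def rerank(nbest, scorers, weights):
        # nbest: hypothesis strings from the recognizer, best-first
        def combined(hyp):
            return sum(w * s(hyp) for s, w in zip(scorers, weights))
        return sorted(nbest, key=combined, reverse=True)

    # Invented knowledge sources: recognizer rank and a grammaticality bit.
    def recognizer_score(nbest):
        rank = {h: -i for i, h in enumerate(nbest)}
        return lambda h: rank[h]

    nbest = ['show me flight', 'show me flights']
    scorers = [recognizer_score(nbest),
               lambda h: 1.0 if h.endswith('flights') else 0.0]  # toy parser
    print(rerank(nbest, scorers, weights=[0.5, 1.0]))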
arxiv-668382 | cmp-lg/9407011 | Discourse Obligations in Dialogue Processing | <|reference_start|>Discourse Obligations in Dialogue Processing: We show that in modeling social interaction, particularly dialogue, the attitude of obligation can be a useful adjunct to the popularly considered attitudes of belief, goal, and intention and their mutual and shared counterparts. In particular, we show how discourse obligations can be used to account in a natural manner for the connection between a question and its answer in dialogue and how obligations can be used along with other parts of the discourse context to extend the coverage of a dialogue system.<|reference_end|> | arxiv | @article{traum1994discourse,
title={Discourse Obligations in Dialogue Processing},
author={David R. Traum (University of Rochester) and James F. Allen
(University of Rochester)},
journal={In Proceedings of ACL-94},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407011},
primaryClass={cmp-lg cs.CL}
} | traum1994discourse |
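The bookkeeping behind this account can be sketched in a few lines: certain observed acts impose obligations on the addressee, and responsive acts discharge them. The act and obligation labels below are invented placeholders for illustration.

    obligations = []            # pending (agent, obligation) pairs

    def observe(act, speaker, hearer):
        if act == 'question':
            obligations.append((hearer, 'address-question'))
        elif act == 'answer' and (speaker, 'address-question') in obligations:
            obligations.remove((speaker, 'address-question'))

    observe('question', 'A', 'B')
    print(obligations)          # [('B', 'address-question')]
    observe('answer', 'B', 'A')
    print(obligations)          # []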
arxiv-668383 | cmp-lg/9407012 | Phoneme Recognition Using Acoustic Events | <|reference_start|>Phoneme Recognition Using Acoustic Events: This paper presents a new approach to phoneme recognition using nonsequential sub-phoneme units. These units are called acoustic events and are phonologically meaningful as well as recognizable from speech signals. Acoustic events form a phonologically incomplete representation as compared to distinctive features. This problem may partly be overcome by incorporating phonological constraints. Currently, 24 binary events describing manner and place of articulation, vowel quality and voicing are used to recognize all German phonemes. Phoneme recognition in this paradigm consists of two steps: after the acoustic events have been determined from the speech signal, a phonological parser is used to generate syllable and phoneme hypotheses from the event lattice. Results obtained on a speaker-dependent corpus are presented.<|reference_end|> | arxiv | @article{huebener1994phoneme,
title={Phoneme Recognition Using Acoustic Events},
author={Kai Huebener (University of Hamburg, Germany), Julie Carson-Berndsen
(University of Bielefeld, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9407012},
year={1994},
number={Verbmobil-Report 15},
archivePrefix={arXiv},
eprint={cmp-lg/9407012},
primaryClass={cmp-lg cs.CL}
} | huebener1994phoneme |
arxiv-668384 | cmp-lg/9407013 | The Acquisition of a Lexicon from Paired Phoneme Sequences and Semantic Representations | <|reference_start|>The Acquisition of a Lexicon from Paired Phoneme Sequences and Semantic Representations: We present an algorithm that acquires words (pairings of phonological forms and semantic representations) from larger utterances of unsegmented phoneme sequences and semantic representations. The algorithm maintains from utterance to utterance only a single coherent dictionary, and learns in the presence of homonymy, synonymy, and noise. Test results over a corpus of utterances generated from the Childes database of mother-child interactions are presented.<|reference_end|> | arxiv | @article{de marcken1994the,
title={The Acquisition of a Lexicon from Paired Phoneme Sequences and Semantic
Representations},
author={Carl de Marcken (MIT Artificial Intelligence Laboratory)},
journal={arXiv preprint arXiv:cmp-lg/9407013},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407013},
primaryClass={cmp-lg cs.CL}
} | de marcken1994the |
arxiv-668385 | cmp-lg/9407014 | Abstract Machine for Typed Feature Structures | <|reference_start|>Abstract Machine for Typed Feature Structures: This paper describes a first step towards the definition of an abstract machine for linguistic formalisms that are based on typed feature structures, such as HPSG. The core design of the abstract machine is given in detail, including the compilation process from a high-level specification language to the abstract machine language and the implementation of the abstract instructions. We thus apply methods that were proved useful in computer science to the study of natural languages: a grammar specified using the formalism is endowed with an operational semantics. Currently, our machine supports the unification of simple feature structures, unification of sequences of such structures, cyclic structures and disjunction.<|reference_end|> | arxiv | @article{wintner1994abstract,
title={Abstract Machine for Typed Feature Structures},
author={Shuly Wintner (Technion, Israel Institute of Technology) and Nissim
Francez (Technion, Israel Institute of Technology)},
journal={arXiv preprint arXiv:cmp-lg/9407014},
year={1994},
number={TR #LCL 94-8, Laboratory for Computational Linguistics, Technion},
archivePrefix={arXiv},
eprint={cmp-lg/9407014},
primaryClass={cmp-lg cs.CL}
} | wintner1994abstract |
arxiv-668386 | cmp-lg/9407015 | Specifying Intonation from Context for Speech Synthesis | <|reference_start|>Specifying Intonation from Context for Speech Synthesis: This paper presents a theory and a computational implementation for generating prosodically appropriate synthetic speech in response to database queries. Proper distinctions of contrast and emphasis are expressed in an intonation contour that is synthesized by rule under the control of a grammar, a discourse model, and a knowledge base. The theory is based on Combinatory Categorial Grammar, a formalism which easily integrates the notions of syntactic constituency, semantics, prosodic phrasing and information structure. Results from our current implementation demonstrate the system's ability to generate a variety of intonational possibilities for a given sentence depending on the discourse context.<|reference_end|> | arxiv | @article{prevost1994specifying,
title={Specifying Intonation from Context for Speech Synthesis},
author={Scott Prevost (University of Pennsylvania), Mark Steedman (University
of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9407015},
year={1994},
number={MS-CIS-94-37/LINC LAB 273},
archivePrefix={arXiv},
eprint={cmp-lg/9407015},
primaryClass={cmp-lg cs.CL}
} | prevost1994specifying |
arxiv-668387 | cmp-lg/9407016 | The Role of Cognitive Modeling in Achieving Communicative Intentions | <|reference_start|>The Role of Cognitive Modeling in Achieving Communicative Intentions: A discourse planner for (task-oriented) dialogue must be able to make choices about whether relevant, but optional information (for example, the "satellites" in an RST-based planner) should be communicated. We claim that effective text planners must explicitly model aspects of the Hearer's cognitive state, such as what the hearer is attending to and what inferences the hearer can draw, in order to make these choices. We argue that a mere representation of the Hearer's knowledge is inadequate. We support this claim by (1) an analysis of naturally occurring dialogue, and (2) by simulating the generation of discourses in a situation in which we can vary the cognitive parameters of the hearer. Our results show that modeling cognitive state can lead to more effective discourses (measured with respect to a simple task).<|reference_end|> | arxiv | @article{walker1994the,
title={The Role of Cognitive Modeling in Achieving Communicative Intentions},
author={Marilyn Walker (Mitsubishi Electric Research Laboratories), Owen
Rambow (Universite Paris 7 / CoGenTex, Inc)},
journal={arXiv preprint arXiv:cmp-lg/9407016},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407016},
primaryClass={cmp-lg cs.CL}
} | walker1994the |
arxiv-668388 | cmp-lg/9407017 | Generating Context-Appropriate Word Orders in Turkish | <|reference_start|>Generating Context-Appropriate Word Orders in Turkish: Turkish has considerably freer word order than English. The interpretations of different word orders in Turkish rely on information that describes how a sentence relates to its discourse context. To capture the syntactic features of a free word order language, I present an adaptation of Combinatory Categorial Grammars called {}-CCGs (set-CCGs). In {}-CCGs, a verb's subcategorization requirements are relaxed so that it requires a set of arguments without specifying their linear order. I integrate a level of information structure, representing pragmatic functions such as topic and focus, with {}-CCGs to allow certain pragmatic distinctions in meaning to influence the word order of a sentence in a compositional way. Finally, I discuss how this strategy is used within an implemented generation system which produces Turkish sentences with context-appropriate word orders in a simple database query task.<|reference_end|> | arxiv | @article{hoffman1994generating,
title={Generating Context-Appropriate Word Orders in Turkish},
author={Beryl Hoffman (University of Pennsylvania)},
journal={arXiv preprint arXiv:cmp-lg/9407017},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407017},
primaryClass={cmp-lg cs.CL}
} | hoffman1994generating |
arxiv-668389 | cmp-lg/9407018 | Generating Multilingual Documents from a Knowledge Base: The TECHDOC Project | <|reference_start|>Generating Multilingual Documents from a Knowledge Base: The TECHDOC Project: TECHDOC is an implemented system demonstrating the feasibility of generating multilingual technical documents on the basis of a language-independent knowledge base. Its application domain is user and maintenance instructions, which are produced from underlying plan structures representing the activities, the participating objects with their properties, relations, and so on. This paper gives a brief outline of the system architecture and discusses some recent developments in the project: the addition of actual event simulation in the KB, steps towards a document authoring tool, and a multimodal user interface. (slightly corrected version of a paper to appear in: COLING 94, Proceedings)<|reference_end|> | arxiv | @article{rösner1994generating,
title={Generating Multilingual Documents from a Knowledge Base: The TECHDOC
Project},
author={Dietmar R\"osner (FAW Ulm, Ulm, Germany), Manfred Stede (University of
Toronto and FAW Ulm)},
journal={arXiv preprint arXiv:cmp-lg/9407018},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407018},
primaryClass={cmp-lg cs.CL}
} | rösner1994generating |
arxiv-668390 | cmp-lg/9407019 | Tracking Point of View in Narrative | <|reference_start|>Tracking Point of View in Narrative: Third-person fictional narrative text is composed not only of passages that objectively narrate events, but also of passages that present characters' thoughts, perceptions, and inner states. Such passages take a character's ``psychological point of view''. A language understander must determine the current psychological point of view in order to distinguish the beliefs of the characters from the facts of the story, to correctly attribute beliefs and other attitudes to their sources, and to understand the discourse relations among sentences. Tracking the psychological point of view is not a trivial problem, because many sentences are not explicitly marked for point of view, and whether the point of view of a sentence is objective or that of a character (and if the latter, which character it is) often depends on the context in which the sentence appears. Tracking the psychological point of view is the problem addressed in this work. The approach is to seek, by extensive examinations of naturally-occurring narrative, regularities in the ways that authors manipulate point of view, and to develop an algorithm that tracks point of view on the basis of the regularities found. This paper presents this algorithm, gives demonstrations of an implemented system, and describes the results of some preliminary empirical studies, which lend support to the algorithm.<|reference_end|> | arxiv | @article{wiebe1994tracking,
title={Tracking Point of View in Narrative},
author={Janyce M. Wiebe (New Mexico State University)},
journal={Computational Linguistics 20:2, 233-287},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407019},
primaryClass={cmp-lg cs.CL}
} | wiebe1994tracking |
arxiv-668391 | cmp-lg/9407020 | A Sequential Algorithm for Training Text Classifiers | <|reference_start|>A Sequential Algorithm for Training Text Classifiers: The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.<|reference_end|> | arxiv | @article{lewis1994a,
title={A Sequential Algorithm for Training Text Classifiers},
author={David D. Lewis (AT&T Bell Labs) and William A. Gale (AT&T Bell Labs)},
journal={arXiv preprint arXiv:cmp-lg/9407020},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407020},
primaryClass={cmp-lg cs.CL}
} | lewis1994a |
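To make the sampling loop concrete, here is a minimal Python sketch of uncertainty sampling. The synthetic vectors, logistic-regression classifier, batch size, and round count are all illustrative assumptions; the paper's setting is newswire text categorization:

```python
# A sketch of uncertainty sampling (toy data; hypothetical settings).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                # stand-in for document vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # stand-in for gold labels

# Seed set with both classes present, everything else in the unlabeled pool.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(X)) if i not in set(labeled)]

for _ in range(20):                            # 20 rounds of querying
    clf = LogisticRegression().fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    # Query the pool items whose predicted probability is closest to 0.5,
    # i.e. the ones the current classifier is least certain about.
    batch = set(np.argsort(np.abs(probs - 0.5))[:5].tolist())
    labeled.extend(pool[j] for j in batch)
    pool = [x for j, x in enumerate(pool) if j not in batch]

print("examples labeled:", len(labeled))       # 10 seeds + 20 rounds * 5
```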
arxiv-668392 | cmp-lg/9407021 | K-vec: A New Approach for Aligning Parallel Texts | <|reference_start|>K-vec: A New Approach for Aligning Parallel Texts: Various methods have been proposed for aligning texts in two or more languages such as the Canadian Parliamentary Debates (Hansards). Some of these methods generate a bilingual lexicon as a by-product. We present an alternative alignment strategy, which we call K-vec, that starts by estimating the lexicon. For example, it discovers that the English word "fisheries" is similar to the French "pêches" by noting that the distribution of "fisheries" in the English text is similar to the distribution of "pêches" in the French. K-vec does not depend on sentence boundaries.<|reference_end|> | arxiv | @article{fung1994k-vec:,
title={K-vec: A New Approach for Aligning Parallel Texts},
author={Pascale Fung (Columbia University), Kenneth Church (AT&T Bell Labs)},
journal={arXiv preprint arXiv:cmp-lg/9407021},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407021},
primaryClass={cmp-lg cs.CL}
} | fung1994k-vec: |
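The heart of K-vec, representing each word by the pieces of its text it occurs in and comparing occurrence vectors across the two languages, can be sketched as follows. The toy sentences, whitespace tokenization, and PMI scoring are my simplifications, not the authors' implementation:

```python
# A sketch of the K-vec lexicon step (toy texts; simplified PMI scoring).
from math import log

def k_vec(tokens, K):
    """Map each word type to the set of the K equal text pieces it occurs in."""
    piece = max(1, len(tokens) // K)
    occ = {}
    for i, w in enumerate(tokens):
        occ.setdefault(w, set()).add(min(i // piece, K - 1))
    return occ

def pmi(a, b, K):
    """Pointwise mutual information of occurring in the same piece."""
    joint = len(a & b) / K
    return log(joint / ((len(a) / K) * (len(b) / K))) if joint else float("-inf")

K = 4
en = ("the fisheries committee met . the committee discussed fisheries "
      "policy . new fisheries rules passed .").split()
fr = ("le comite des peches s'est reuni . le comite a discute la politique "
      "des peches . de nouvelles regles sur les peches adoptees .").split()
ev, fv = k_vec(en, K), k_vec(fr, K)
pairs = sorted(((pmi(ev[e], fv[f], K), e, f) for e in ev for f in fv),
               reverse=True)
print(pairs[:3])   # highest-scoring word pairs are translation candidates
```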
arxiv-668393 | cmp-lg/9407022 | Comparative Discourse Analysis of Parallel Texts | <|reference_start|>Comparative Discourse Analysis of Parallel Texts: A quantitative representation of discourse structure can be computed by measuring lexical cohesion relations among adjacent blocks of text. These representations have been proposed to deal with sub-topic text segmentation. In a parallel corpus, similar representations can be derived for versions of a text in various languages. These can be used for parallel segmentation and as an alternative measure of text-translation similarity.<|reference_end|> | arxiv | @article{van der eijk1994comparative,
title={Comparative Discourse Analysis of Parallel Texts},
author={Pim van der Eijk (Digital Equipment Corporation)},
journal={arXiv preprint arXiv:cmp-lg/9407022},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407022},
primaryClass={cmp-lg cs.CL}
} | van der eijk1994comparative |
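A rough sketch of the cohesion computation the abstract describes: adjacent fixed-size blocks of text are compared by the similarity of their word distributions, and valleys in the resulting profile suggest sub-topic boundaries. The block size and cosine measure are illustrative choices, not necessarily the paper's:

```python
# A sketch of a lexical-cohesion profile over adjacent blocks (TextTiling-
# style; block size and cosine similarity are illustrative choices).
from collections import Counter
from math import sqrt

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a if w in b)
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def cohesion_profile(tokens, block=20):
    """Similarity of each adjacent block pair; valleys in the profile are
    candidate sub-topic boundaries."""
    blocks = [Counter(tokens[i:i + block]) for i in range(0, len(tokens), block)]
    return [cosine(blocks[i], blocks[i + 1]) for i in range(len(blocks) - 1)]

text = ("the treaty covers fishing quotas " * 10 +
        "the report then turns to trade tariffs " * 10).split()
print(cohesion_profile(text))   # high within a topic, a dip at the topic shift
```

Profiles computed over a text and its translation can then be compared: similar shapes support parallel segmentation and serve as a measure of text-translation similarity.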
arxiv-668394 | cmp-lg/9407023 | Multi-Tape Two-Level Morphology: A Case Study in Semitic Non-linear Morphology | <|reference_start|>Multi-Tape Two-Level Morphology: A Case Study in Semitic Non-linear Morphology: This paper presents an implemented multi-tape two-level model capable of describing Semitic non-linear morphology. The computational framework behind the current work is motivated by Kay (1987); the formalism presented here is an extension to the formalism reported by Pulman and Hepple (1993). The objectives of the current work are: to stay as close as possible, in spirit, to standard two-level morphology, to stay close to the linguistic description of Semitic stems, and to present a model which can be used with ease by the Semitist. The paper illustrates that if finite-state transducers (FSTs) in a standard two-level morphology model are replaced with multi-tape auxiliary versions (AFSTs), one can account for Semitic root-and-pattern morphology using high level notation.<|reference_end|> | arxiv | @article{kiraz1994multi-tape,
title={Multi-Tape Two-Level Morphology: A Case Study in Semitic Non-linear
Morphology},
author={George Kiraz (University of Cambridge)},
journal={arXiv preprint arXiv:cmp-lg/9407023},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407023},
primaryClass={cmp-lg cs.CL}
} | kiraz1994multi-tape |
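The non-linear phenomenon the multi-tape model captures is easiest to see in miniature: a consonantal root and a vocalism are interleaved through a CV template, each behaving like a separate tape. The template notation and Arabic-style glosses below are a toy illustration of mine, not the paper's formalism:

```python
# Root-and-pattern interdigitation: read consonants from the root "tape"
# and vowels from the vocalism "tape", as directed by the CV template.
def interdigitate(template, root, vowels):
    r, v = iter(root), iter(vowels)
    return "".join(next(r) if slot == "C" else next(v) for slot in template)

# The same root with different vocalisms and templates yields different stems:
print(interdigitate("CVCVC", "ktb", "aa"))    # katab  -- 'wrote' (active)
print(interdigitate("CVCVC", "ktb", "ui"))    # kutib  -- 'was written' (passive)
print(interdigitate("CVVCVC", "ktb", "aai"))  # kaatib -- 'writer' (agent noun)
```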
arxiv-668395 | cmp-lg/9407024 | PRINCIPAR---An Efficient, Broad-coverage, Principle-based Parser | <|reference_start|>PRINCIPAR---An Efficient, Broad-coverage, Principle-based Parser: We present an efficient, broad-coverage, principle-based parser for English. The parser has been implemented in C++ and runs on SUN Sparcstations with X-windows. It contains a lexicon with over 90,000 entries, constructed automatically by applying a set of extraction and conversion rules to entries from machine readable dictionaries.<|reference_end|> | arxiv | @article{lin1994principar---an,
title={PRINCIPAR---An Efficient, Broad-coverage, Principle-based Parser},
author={Dekang Lin (Department of Computer Science, University of Manitoba)},
journal={arXiv preprint arXiv:cmp-lg/9407024},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407024},
primaryClass={cmp-lg cs.CL}
} | lin1994principar---an |
arxiv-668396 | cmp-lg/9407025 | Recovering From Parser Failures: A Hybrid Statistical/Symbolic Approach | <|reference_start|>Recovering From Parser Failures: A Hybrid Statistical/Symbolic Approach: We describe an implementation of a hybrid statistical/symbolic approach to repairing parser failures in a speech-to-speech translation system. We describe a module which takes as input a fragmented parse and returns a repaired meaning representation. It negotiates with the speaker about what the complete meaning of the utterance is by generating hypotheses about how to fit the fragments of the partial parse together into a coherent meaning representation. By drawing upon both statistical and symbolic information, it constrains its repair hypotheses to those which are both likely and meaningful. Because it updates its statistical model during use, it improves its performance over time.<|reference_end|> | arxiv | @article{rose'1994recovering,
title={Recovering From Parser Failures: A Hybrid Statistical/Symbolic Approach},
author={Carolyn Penstein Rose' (Carnegie Mellon University), and Alex Waibel
(Carnegie Mellon University)},
journal={arXiv preprint arXiv:cmp-lg/9407025},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407025},
primaryClass={cmp-lg cs.CL}
} | rose'1994recovering |
arxiv-668397 | cmp-lg/9407026 | Tagging and Morphological Disambiguation of Turkish Text | <|reference_start|>Tagging and Morphological Disambiguation of Turkish Text: Automatic text tagging is an important component in the higher-level analysis of text corpora, and its output can be used in many natural language processing applications. In languages like Turkish or Finnish, with agglutinative morphology, morphological disambiguation is a crucial step in tagging, as the structures of many lexical forms are morphologically ambiguous. This paper describes a POS tagger for Turkish text based on a full-scale two-level specification of Turkish morphology built on a lexicon of about 24,000 root words. This is augmented with a multi-word and idiomatic construct recognizer and, most importantly, a morphological disambiguator based on local neighborhood constraints, heuristics, and a limited amount of statistical information. The tagger also has functionality for statistics compilation and fine-tuning of the morphological analyzer, such as logging erroneous morphological parses, commonly used roots, etc. Preliminary results indicate that the tagger can tag about 98-99\% of the texts accurately with very minimal user intervention. Furthermore, for sentences morphologically disambiguated with the tagger, an LFG parser developed for Turkish generates, on average, 50\% fewer ambiguous parses and parses almost 2.5 times faster. The tagging functionality is not specific to Turkish and can be applied to any language with a proper morphological analysis interface.<|reference_end|> | arxiv | @article{oflazer1994tagging,
title={Tagging and Morphological Disambiguation of Turkish Text},
author={Kemal Oflazer (Bilkent University, Ankara, Turkey), Ilker Kuruoz
(Bilkent University, Ankara, Turkey)},
journal={arXiv preprint arXiv:cmp-lg/9407026},
year={1994},
number={Bilkent University CS Dept. Tech Report NO: BU-CEIS-9416},
archivePrefix={arXiv},
eprint={cmp-lg/9407026},
primaryClass={cmp-lg cs.CL}
} | oflazer1994tagging |
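A minimal sketch of disambiguation by local neighborhood constraints of the kind the abstract mentions: each token carries several candidate analyses, and contextual rules vote among them. The rule shape and scores are invented for illustration; the tagger's actual constraints are considerably richer:

```python
# A sketch of voting with local-neighborhood constraints (invented rules).
def disambiguate(sentence, constraints):
    """sentence: a list of candidate-analysis lists, one per token."""
    chosen = []
    for i, candidates in enumerate(sentence):
        scores = {c: 0 for c in candidates}
        for c in candidates:
            for applies, delta in constraints:
                if applies(i, c, sentence, chosen):
                    scores[c] += delta
        chosen.append(max(candidates, key=lambda c: scores[c]))
    return chosen

# One toy constraint: demote a verb reading right after a determiner.
constraints = [
    (lambda i, c, sent, done: c == "Verb" and bool(done) and done[-1] == "Det",
     -10),
]
print(disambiguate([["Det"], ["Noun", "Verb"], ["Verb"]], constraints))
# -> ['Det', 'Noun', 'Verb']
```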
arxiv-668398 | cmp-lg/9407027 | Parsing as Tree Traversal | <|reference_start|>Parsing as Tree Traversal: This paper presents a unified approach to parsing, in which top-down, bottom-up and left-corner parsers are related to preorder, postorder and inorder tree traversals. It is shown that the simplest bottom-up and left-corner parsers are left recursive and must be converted using an extended Greibach normal form. With further partial execution, the bottom-up and left-corner parsers collapse together as in the BUP parser of Matsumoto.<|reference_end|> | arxiv | @article{gerdemann1994parsing,
title={Parsing as Tree Traversal},
author={Dale Gerdemann (University of Tuebingen, Germany)},
journal={arXiv preprint arXiv:cmp-lg/9407027},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407027},
primaryClass={cmp-lg cs.CL}
} | gerdemann1994parsing |
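The correspondence the paper draws can be shown directly: the three traversals below visit the nodes of a finished parse tree in exactly the orders in which top-down, bottom-up, and left-corner parsers would announce them. The toy tree and Python encoding are mine; the paper itself works in Prolog:

```python
def preorder(t):   # top-down: predict a node before its children
    yield t[0]
    for child in t[1:]:
        yield from preorder(child)

def postorder(t):  # bottom-up: complete all children before the node
    for child in t[1:]:
        yield from postorder(child)
    yield t[0]

def inorder(t):    # left-corner: first child bottom-up, then the node,
    if len(t) > 1: #              then the remaining children top-down
        yield from inorder(t[1])
    yield t[0]
    for child in t[2:]:
        yield from inorder(child)

tree = ("S", ("NP", ("Det", ("a",)), ("N", ("dog",))),
             ("VP", ("V", ("barks",))))
for name, f in [("top-down", preorder), ("bottom-up", postorder),
                ("left-corner", inorder)]:
    print(name, list(f(tree)))
```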
arxiv-668399 | cmp-lg/9407028 | Automated Postediting of Documents | <|reference_start|>Automated Postediting of Documents: Large amounts of low- to medium-quality English texts are now being produced by machine translation (MT) systems, optical character readers (OCR), and non-native speakers of English. Most of this text must be postedited by hand before it sees the light of day. Improving text quality is tedious work, but its automation has not received much research attention. Anyone who has postedited a technical report or thesis written by a non-native speaker of English knows the potential of an automated postediting system. For the case of MT-generated text, we argue for the construction of postediting modules that are portable across MT systems, as an alternative to hardcoding improvements inside any one system. As an example, we have built a complete self-contained postediting module for the task of article selection (a, an, the) for English noun phrases. This is a notoriously difficult problem for Japanese-English MT. Our system contains over 200,000 rules derived automatically from online text resources. We report on learning algorithms, accuracy, and comparisons with human performance.<|reference_end|> | arxiv | @article{knight1994automated,
title={Automated Postediting of Documents},
author={Kevin Knight (USC/Information Sciences Institute) and Ishwar Chander
(USC/Information Sciences Institute)},
journal={arXiv preprint arXiv:cmp-lg/9407028},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407028},
primaryClass={cmp-lg cs.CL}
} | knight1994automated |
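The article-selection task is easy to state as a decision procedure, even though the system's 200,000 corpus-derived rules are far richer than the handful of hand-written features in this illustrative sketch:

```python
# A toy article selector (hand rules for illustration only; the real
# system learns its rules automatically from online text resources).
def choose_article(definite, plural, mass, vowel_initial):
    """Pick an article for an English NP from a few coarse features."""
    if definite:
        return "the"
    if plural or mass:
        return ""                    # bare NP: 'reports were filed'
    return "an" if vowel_initial else "a"

print(choose_article(definite=False, plural=False, mass=False,
                     vowel_initial=True))    # 'an'  ('an error occurred')
print(choose_article(definite=True, plural=False, mass=False,
                     vowel_initial=False))   # 'the'
```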
arxiv-668400 | cmp-lg/9407029 | Building a Large-Scale Knowledge Base for Machine Translation | <|reference_start|>Building a Large-Scale Knowledge Base for Machine Translation: Knowledge-based machine translation (KBMT) systems have achieved excellent results in constrained domains, but have not yet scaled up to newspaper text. The reason is that knowledge resources (lexicons, grammar rules, world models) must be painstakingly handcrafted from scratch. One of the hypotheses being tested in the PANGLOSS machine translation project is whether or not these resources can be semi-automatically acquired on a very large scale. This paper focuses on the construction of a large ontology (or knowledge base, or world model) for supporting KBMT. It contains representations for some 70,000 commonly encountered objects, processes, qualities, and relations. The ontology was constructed by merging various online dictionaries, semantic networks, and bilingual resources, through semi-automatic methods. Some of these methods (e.g., conceptual matching of semantic taxonomies) are broadly applicable to problems of importing/exporting knowledge from one KB to another. Other methods (e.g., bilingual matching) allow a knowledge engineer to build up an index to a KB in a second language, such as Spanish or Japanese.<|reference_end|> | arxiv | @article{knight1994building,
title={Building a Large-Scale Knowledge Base for Machine Translation},
author={Kevin Knight (USC/Information Sciences Institute) and Steve K. Luk
(USC/Information Sciences Institute)},
journal={arXiv preprint arXiv:cmp-lg/9407029},
year={1994},
archivePrefix={arXiv},
eprint={cmp-lg/9407029},
primaryClass={cmp-lg cs.CL}
} | knight1994building |
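One of the broadly applicable methods mentioned, conceptual matching of semantic taxonomies, can be caricatured in a few lines: align a concept from one resource with a concept from another only when both their names and their taxonomic neighborhoods agree. The matching heuristic and mini-taxonomies are invented for illustration:

```python
# A sketch of taxonomy matching: name match plus parent-name overlap.
def match(concepts_a, concepts_b):
    """Each input maps a concept name to the set of its parents' names."""
    pairs = []
    for a, pa in concepts_a.items():
        for b, pb in concepts_b.items():
            if a.lower() == b.lower() and pa & pb:
                pairs.append((a, b))
    return pairs

onto = {"bank": {"institution"}, "crane": {"bird"}}
dictionary = {"Bank": {"institution", "building"}, "Crane": {"machine"}}
print(match(onto, dictionary))   # [('bank', 'Bank')]; 'crane' senses differ
```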