corpus_id | paper_id | title | abstract | source | bibtex | citation_key
---|---|---|---|---|---|---
arxiv-676301 | cs/9904009 | An ascription-based approach to speech acts | <|reference_start|>An ascription-based approach to speech acts: The two principal areas of natural language processing research in pragmatics are belief modelling and speech act processing. Belief modelling is the development of techniques to represent the mental attitudes of a dialogue participant. The latter approach, speech act processing, based on speech act theory, involves viewing dialogue in planning terms. Utterances in a dialogue are modelled as steps in a plan where understanding an utterance involves deriving the complete plan a speaker is attempting to achieve. However, previous speech act based approaches have been limited by a reliance upon relatively simplistic belief modelling techniques and their relationship to planning and plan recognition. In particular, such techniques assume precomputed nested belief structures. In this paper, we will present an approach to speech act processing based on novel belief modelling techniques where nested beliefs are propagated on demand.<|reference_end|> | arxiv | @article{lee1999an,
title={An ascription-based approach to speech acts},
author={Mark Lee and Yorick Wilks},
journal={Proceedings of COLING'96, Copenhagen. (1996)},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904009},
primaryClass={cs.CL}
} | lee1999an |
arxiv-676302 | cs/9904010 | Beyond Concern: Understanding Net Users' Attitudes About Online Privacy | <|reference_start|>Beyond Concern: Understanding Net Users' Attitudes About Online Privacy: People are concerned about privacy, particularly on the Internet. While many studies have provided evidence of this concern, few have explored the nature of the concern in detail, especially for the online environment. With this study, we have tried to better understand the nature of online privacy concerns; we look beyond the fact that people are concerned and attempt to understand how they are concerned. We hope our results will help inform both policy decisions as well as the development of technology tools that can assist Internet users in protecting their privacy. We present results here from the analysis of 381 questionnaires completed between November 6 and November 13, 1998 by American Internet users. The sample was drawn from the FamilyPC magazine/Digital Research, Inc. Family Panel. While this is not a statistically representative sample of US Internet users, our respondents are heavy Internet users, and quite possibly lead innovators. As such, we believe that this sample is important for understanding the future Internet user population.<|reference_end|> | arxiv | @article{cranor1999beyond,
title={Beyond Concern: Understanding Net Users' Attitudes About Online Privacy},
author={Lorrie Faith Cranor, Joseph Reagle, and Mark S. Ackerman},
journal={arXiv preprint arXiv:cs/9904010},
year={1999},
number={AT&T Labs-Research Technical Report TR 99.4.3},
archivePrefix={arXiv},
eprint={cs/9904010},
primaryClass={cs.CY cs.HC}
} | cranor1999beyond |
arxiv-676303 | cs/9904011 | WebScript -- A Scripting Language for the Web | <|reference_start|>WebScript -- A Scripting Language for the Web: WebScript is a scripting language for processing Web documents. Designed as an extension to Jacl, the Java implementation of Tcl, WebScript allows programmers to manipulate HTML in the same way as Tcl manipulates text strings and GUI elements. This leads to a completely new way of writing the next generation of Web applications. This paper presents the motivation behind the design and implementation of WebScript, an overview of its major features, as well as some demonstrations of its power.<|reference_end|> | arxiv | @article{zhang1999webscript,
title={WebScript -- A Scripting Language for the Web},
author={Yin Zhang},
journal={arXiv preprint arXiv:cs/9904011},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904011},
primaryClass={cs.NI cs.PL}
} | zhang1999webscript |
arxiv-676304 | cs/9904012 | Active Virtual Network Management Protocol | <|reference_start|>Active Virtual Network Management Protocol: This paper introduces a novel algorithm, the Active Virtual Network Management Protocol (AVNMP), for predictive network management. It explains how the AVNMP facilitates the management of an active network by allowing future predicted state information within an active network to be available to network management algorithms. This is accomplished by coupling ideas from optimistic discrete event simulation with active networking. The optimistic discrete event simulation method used is a form of self-adjusting Time Warp. It is self-adjusting because the system adjusts for predictions which are inaccurate beyond a given tolerance. The concept of a streptichron and autoanaplasis are introduced as mechanisms which take advantage of the enhanced flexibility and intelligence of active packets. Finally, it is demonstrated that the AVNMP is a feasible concept.<|reference_end|> | arxiv | @article{bush1999active,
title={Active Virtual Network Management Protocol},
author={Stephen F. Bush},
journal={arXiv preprint arXiv:cs/9904012},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904012},
primaryClass={cs.NI}
} | bush1999active |
arxiv-676305 | cs/9904013 | Network Management of Predictive Mobile Networks | <|reference_start|>Network Management of Predictive Mobile Networks: There is a trend toward the use of predictive systems in communications networks. At the systems and network management level predictive capabilities are focused on anticipating network faults and performance degradation. Simultaneously, mobile communication networks are being developed with predictive location and tracking mechanisms. The interactions and synergies between these systems present a new set of problems. A new predictive network management framework is developed and examined. The interaction between a predictive mobile network and the proposed network management system is discussed. The Rapidly Deployable Radio Network is used as a specific example to illustrate these interactions.<|reference_end|> | arxiv | @article{bush1999network,
title={Network Management of Predictive Mobile Networks},
author={Stephen F. Bush and Victor S. Frost and Joseph B. Evans},
journal={Journal of Network and Systems Management, volume 7, number 2,
June, 1999},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904013},
primaryClass={cs.NI}
} | bush1999network |
arxiv-676306 | cs/9904014 | A Control and Management Network for Wireless ATM Systems | <|reference_start|>A Control and Management Network for Wireless ATM Systems: This paper describes the design of a control and management network (orderwire) for a mobile wireless Asynchronous Transfer Mode (ATM) network. This mobile wireless ATM network is part of the Rapidly Deployable Radio Network (RDRN). The orderwire system consists of a packet radio network which overlays the mobile wireless ATM network, each network element in this network uses Global Positioning System (GPS) information to control a beamforming antenna subsystem which provides for spatial reuse. This paper also proposes a novel Virtual Network Configuration (VNC) algorithm for predictive network configuration. A mobile ATM Private Network-Network Interface (PNNI) based on VNC is also discussed. Finally, as a prelude to the system implementation, results of a Maisie simulation of the orderwire system are discussed.<|reference_end|> | arxiv | @article{bush1999a,
title={A Control and Management Network for Wireless ATM Systems},
author={Stephen F. Bush and Sunil Jagannath and Joseph B. Evans and Victor
Frost and Gary Minden and K. Sam Shanmugan},
journal={ACM-Baltzer Wireless Networks (WINET), volume 3, pages
267-283,1997},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904014},
primaryClass={cs.NI}
} | bush1999a |
arxiv-676307 | cs/9904015 | Mobile ATM Buffer Capacity Analysis | <|reference_start|>Mobile ATM Buffer Capacity Analysis: This paper extends a stochastic theory for buffer fill distribution for multiple ``on'' and ``off'' sources to a mobile environment. Queue fill distribution is described by a set of differential equations assuming sources alternate asynchronously between exponentially distributed periods in ``on'' and ``off'' states. This paper includes the probabilities that mobile sources have links to a given queue. The sources represent mobile user nodes, and the queue represents the capacity of a switch. This paper presents a method of analysis which uses mobile parameters such as speed, call rates per unit area, cell area, and call duration and determines queue fill distribution at the ATM cell level. The analytic results are compared with simulation results.<|reference_end|> | arxiv | @article{bush1999mobile,
title={Mobile ATM Buffer Capacity Analysis},
author={Stephen F. Bush and Sunil Jagannath and Joseph B. Evans and Victor
Frost},
journal={ACM-Baltzer Mobile Networks and Nomadic Applications (NOMAD),1996,
volume 1, number 1, pages 67-73, February},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904015},
primaryClass={cs.NI}
} | bush1999mobile |
arxiv-676308 | cs/9904016 | Brittle System Analysis | <|reference_start|>Brittle System Analysis: The goal of this paper is to define and analyze systems which exhibit brittle behavior. This behavior is characterized by a sudden and steep decline in performance as the system approaches the limits of tolerance. This can be due to input parameters which exceed a specified input, or environmental conditions which exceed specified operating boundaries. An analogy is made between brittle communication systems in particular and materials science.<|reference_end|> | arxiv | @article{bush1999brittle,
title={Brittle System Analysis},
author={Stephen F. Bush, John Hershey and Kirby Vosburgh},
journal={arXiv preprint arXiv:cs/9904016},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904016},
primaryClass={cs.NI cs.CC cs.GL cs.PF}
} | bush1999brittle |
arxiv-676309 | cs/9904017 | A Machine-Independent Debugger--Revisited | <|reference_start|>A Machine-Independent Debugger--Revisited: Most debuggers are notoriously machine-dependent, but some recent research prototypes achieve varying degrees of machine-independence with novel designs. Cdb, a simple source-level debugger for C, is completely independent of its target architecture. This independence is achieved by embedding symbol tables and debugging code in the target program, which costs both time and space. This paper describes a revised design and implementation of cdb that reduces the space cost by nearly one-half and the time cost by 13% by storing symbol tables in external files. A symbol table is defined by a 31-line grammar in the Abstract Syntax Description Language (ASDL). ASDL is a domain-specific language for specifying tree data structures. The ASDL tools accept an ASDL grammar and generate code to construct, read, and write these data structures. Using ASDL automates implementing parts of the debugger, and the grammar documents the symbol table concisely. Using ASDL also suggested simplifications to the interface between the debugger and the target program. Perhaps most important, ASDL emphasizes that symbol tables are data structures, not file formats. Many of the pitfalls of working with low-level file formats can be avoided by focusing instead on high-level data structures and automating the implementation details.<|reference_end|> | arxiv | @article{hanson1999a,
title={A Machine-Independent Debugger--Revisited},
author={David R. Hanson},
journal={Software--Practice & Experience, vol. 29, no. 10, 849-862, Aug.
1999},
year={1999},
number={Microsoft Research MSR-TR-99-04},
archivePrefix={arXiv},
eprint={cs/9904017},
primaryClass={cs.PL cs.SE}
} | hanson1999a |
arxiv-676310 | cs/9904018 | A Computational Memory and Processing Model for Processing for Prosody | <|reference_start|>A Computational Memory and Processing Model for Processing for Prosody: This paper links prosody to the information in a text and how it is processed by the speaker. It describes the operation and output of LOQ, a text-to-speech implementation that includes a model of limited attention and working memory. Attentional limitations are key. Varying the attentional parameter in the simulations varies in turn what counts as given and new in a text, and therefore, the intonational contours with which it is uttered. Currently, the system produces prosody in three different styles: child-like, adult expressive, and knowledgeable. This prosody also exhibits differences within each style -- no two simulations are alike. The limited resource approach captures some of the stylistic and individual variety found in natural prosody.<|reference_end|> | arxiv | @article{cahn1999a,
title={A Computational Memory and Processing Model for Processing for Prosody},
author={Janet E. Cahn},
journal={arXiv preprint arXiv:cs/9904018},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904018},
primaryClass={cs.CL}
} | cahn1999a |
arxiv-676311 | cs/9904019 | Bounds for Small-Error and Zero-Error Quantum Algorithms | <|reference_start|>Bounds for Small-Error and Zero-Error Quantum Algorithms: We present a number of results related to quantum algorithms with small error probability and quantum algorithms that are zero-error. First, we give a tight analysis of the trade-offs between the number of queries of quantum search algorithms, their error probability, the size of the search space, and the number of solutions in this space. Using this, we deduce new lower and upper bounds for quantum versions of amplification problems. Next, we establish nearly optimal quantum-classical separations for the query complexity of monotone functions in the zero-error model (where our quantum zero-error model is defined so as to be robust when the quantum gates are noisy). Also, we present a communication complexity problem related to a total function for which there is a quantum-classical communication complexity gap in the zero-error model. Finally, we prove separations for monotone graph properties in the zero-error and other error models which imply that the evasiveness conjecture for such properties does not hold for quantum computers.<|reference_end|> | arxiv | @article{buhrman1999bounds,
title={Bounds for Small-Error and Zero-Error Quantum Algorithms},
author={H. Buhrman (CWI), R. Cleve (U.Calgary), R. de Wolf (CWI and
U.Amsterdam), and Ch. Zalka (LANL)},
journal={arXiv preprint arXiv:cs/9904019},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904019},
primaryClass={cs.CC quant-ph}
} | buhrman1999bounds |
arxiv-676312 | cs/9904020 | ODP channel objects that provide services transparently for distributing processing systems | <|reference_start|>ODP channel objects that provide services transparently for distributing processing systems: This paper describes an architecture for a distributing processing system that would allow remote procedure calls to invoke other services as messages are passed between clients and servers. It proposes that an additional class of data processing objects be located in the software communications channel. The objects in this channel would then be used to enforce protocols on client-server applications without any additional effort by the application programmers. For example, services such as key-management, time-stamping, sequencing and encryption can be implemented at different levels of the software communications stack to provide a complete authentication service. A distributing processing environment could be used to control broadband network data delivery. Architectures and invocation semantics are discussed, Example classes and interfaces for channel objects are given in the Java programming language.<|reference_end|> | arxiv | @article{eaves1999odp,
title={ODP channel objects that provide services transparently for distributing
processing systems},
author={Walter Eaves},
journal={arXiv preprint arXiv:cs/9904020},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904020},
primaryClass={cs.DC cs.OS}
} | eaves1999odp |
arxiv-676313 | cs/9904021 | Hadamard product nonlinear formulation of Galerkin and finite element methods | <|reference_start|>Hadamard product nonlinear formulation of Galerkin and finite element methods: A novel nonlinear formulation of the finite element and Galerkin methods is presented here, which leads to the Hadamard product expression of the resultant nonlinear algebraic analogue. The presented formulation attains the advantages of weak formulation in the standard finite element and Galerkin schemes and avoids the costly repeated numerical integration of the Jacobian matrix via the recently developed SJT product approach. This also provides possibility of the nonlinear decoupling computations.<|reference_end|> | arxiv | @article{chen1999hadamard,
title={Hadamard product nonlinear formulation of Galerkin and finite element
methods},
author={W. Chen},
journal={arXiv preprint arXiv:cs/9904021},
year={1999},
archivePrefix={arXiv},
eprint={cs/9904021},
primaryClass={cs.CE cs.NA math.NA}
} | chen1999hadamard |
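For reference, the Hadamard product invoked in the abstract above is the standard element-wise matrix product; the SJT product is specific to the cited work and is not restated here.

```latex
% Element-wise (Hadamard) product of two m-by-n matrices A and B.
\[
  (A \circ B)_{ij} \;=\; A_{ij}\, B_{ij}, \qquad 1 \le i \le m,\; 1 \le j \le n .
\]
```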
arxiv-676314 | cs/9905001 | Supervised Grammar Induction Using Training Data with Limited Constituent Information | <|reference_start|>Supervised Grammar Induction Using Training Data with Limited Constituent Information: Corpus-based grammar induction generally relies on hand-parsed training data to learn the structure of the language. Unfortunately, the cost of building large annotated corpora is prohibitively expensive. This work aims to improve the induction strategy when there are few labels in the training data. We show that the most informative linguistic constituents are the higher nodes in the parse trees, typically denoting complex noun phrases and sentential clauses. They account for only 20% of all constituents. For inducing grammars from sparsely labeled training data (e.g., only higher-level constituent labels), we propose an adaptation strategy, which produces grammars that parse almost as well as grammars induced from fully labeled corpora. Our results suggest that for a partial parser to replace human annotators, it must be able to automatically extract higher-level constituents rather than base noun phrases.<|reference_end|> | arxiv | @article{hwa1999supervised,
title={Supervised Grammar Induction Using Training Data with Limited
Constituent Information},
author={Rebecca Hwa},
journal={arXiv preprint arXiv:cs/9905001},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905001},
primaryClass={cs.CL}
} | hwa1999supervised |
arxiv-676315 | cs/9905002 | DRAFT : Task System and Item Architecture (TSIA) | <|reference_start|>DRAFT : Task System and Item Architecture (TSIA): During its execution, a task is independent of all other tasks. For an application which executes in terms of tasks, the application definition can be free of the details of the execution. Many projects have demonstrated that a task system (TS) can provide such an application with a parallel, distributed, heterogeneous, adaptive, dynamic, real-time, interactive, reliable, secure or other execution. A task consists of items and thus the application is defined in terms of items. An item architecture (IA) can support arrays, routines and other structures of items, thus allowing for a structured application definition. Taking properties from many projects, the support can extend through to currying, application defined types, conditional items, streams and other definition elements. A task system and item architecture (TSIA) thus promises unprecedented levels of support for application execution and definition.<|reference_end|> | arxiv | @article{burow1999draft,
title={DRAFT : Task System and Item Architecture (TSIA)},
author={Burkhard D. Burow},
journal={arXiv preprint arXiv:cs/9905002},
year={1999},
number={DESY 99-066},
archivePrefix={arXiv},
eprint={cs/9905002},
primaryClass={cs.PL cs.DC cs.OS}
} | burow1999draft |
arxiv-676316 | cs/9905003 | Collective Choice Theory in Collaborative Computing | <|reference_start|>Collective Choice Theory in Collaborative Computing: This paper presents some fundamental collective choice theory for information system designers, particularly those working in the field of computer-supported cooperative work. This paper is focused on a presentation of Arrow's Possibility and Impossibility theorems which form the fundamental boundary on the efficacy of collective choice: voting and selection procedures. It restates the conditions that Arrow placed on collective choice functions in more rigorous second-order logic, which could be used as a set of test conditions for implementations, and a useful probabilistic result for analyzing votes on issue pairs. It also describes some simple collective choice functions. There is also some discussion of how enterprises should approach putting their resources under collective control: giving an outline of a superstructure of performative agents to carry out this function and what distributing processing technology would be needed.<|reference_end|> | arxiv | @article{eaves1999collective,
title={Collective Choice Theory in Collaborative Computing},
author={Walter Eaves},
journal={arXiv preprint arXiv:cs/9905003},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905003},
primaryClass={cs.MA cs.DC}
} | eaves1999collective |
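For context, the boundary on collective choice referred to above is Arrow's impossibility theorem; the sketch below gives its standard statement (the paper's own second-order-logic formalization is in the cited work and is not reproduced here).

```latex
% Standard statement of Arrow's theorem; F maps a profile of linear orders
% (\succ_1,\dots,\succ_n) over an alternative set A to a collective order \succ_F.
\begin{align*}
  \textbf{Weak Pareto:}\;      & \forall a,b \in A:\ (\forall i:\ a \succ_i b) \Rightarrow a \succ_F b \\
  \textbf{IIA:}\;              & \text{if two profiles agree on } \{a,b\},\ \text{then the two collective orders agree on } \{a,b\} \\
  \textbf{Non-dictatorship:}\; & \neg\,\exists i \text{ such that for every profile and all } a,b:\ a \succ_i b \Rightarrow a \succ_F b
\end{align*}
% Arrow's theorem: for |A| >= 3, no F with unrestricted domain satisfies all three.
```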
arxiv-676317 | cs/9905004 | Using Collective Intelligence to Route Internet Traffic | <|reference_start|>Using Collective Intelligence to Route Internet Traffic: A COllective INtelligence (COIN) is a set of interacting reinforcement learning (RL) algorithms designed in an automated fashion so that their collective behavior optimizes a global utility function. We summarize the theory of COINs, then present experiments using that theory to design COINs to control internet traffic routing. These experiments indicate that COINs outperform all previously investigated RL-based, shortest path routing algorithms.<|reference_end|> | arxiv | @article{wolpert1999using,
title={Using Collective Intelligence to Route Internet Traffic},
author={David H. Wolpert, Kagan Tumer, Jeremy Frank},
journal={Advances in Neural Information Processing Systems - 11, eds M. Kearns, S.
Solla, D. Cohn, MIT Press, 1999},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905004},
primaryClass={cs.LG adap-org cond-mat.stat-mech cs.DC cs.NI nlin.AO}
} | wolpert1999using |
arxiv-676318 | cs/9905005 | General Principles of Learning-Based Multi-Agent Systems | <|reference_start|>General Principles of Learning-Based Multi-Agent Systems: We consider the problem of how to design large decentralized multi-agent systems (MAS's) in an automated fashion, with little or no hand-tuning. Our approach has each agent run a reinforcement learning algorithm. This converts the problem into one of how to automatically set/update the reward functions for each of the agents so that the global goal is achieved. In particular we do not want the agents to ``work at cross-purposes'' as far as the global goal is concerned. We use the term artificial COllective INtelligence (COIN) to refer to systems that embody solutions to this problem. In this paper we present a summary of a mathematical framework for COINs. We then investigate the real-world applicability of the core concepts of that framework via two computer experiments: we show that our COINs perform near optimally in a difficult variant of Arthur's bar problem (and in particular avoid the tragedy of the commons for that problem), and we also illustrate optimal performance for our COINs in the leader-follower problem.<|reference_end|> | arxiv | @article{wolpert1999general,
title={General Principles of Learning-Based Multi-Agent Systems},
author={David H. Wolpert, Kevin R. Wheeler, Kagan Tumer},
journal={Proceedings of the Third International Conference on Autonomous
Agents, Seattle, WA 1999},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905005},
primaryClass={cs.MA adap-org cond-mat.stat-mech cs.DC cs.LG nlin.AO}
} | wolpert1999general |
arxiv-676319 | cs/9905006 | The Design and Analysis of Virtual Network Configuration for a Wireless Mobile ATM Network | <|reference_start|>The Design and Analysis of Virtual Network Configuration for a Wireless Mobile ATM Network: This research concentrates on the design and analysis of an algorithm referred to as Virtual Network Configuration (VNC) which uses predicted future states of a system for faster network configuration and management. VNC is applied to the configuration of a wireless mobile ATM network. VNC is built on techniques from parallel discrete event simulation merged with constraints from real-time systems and applied to mobile ATM configuration and handoff. Configuration in a mobile network is a dynamic and continuous process. Factors such as load, distance, capacity and topology are all constantly changing in a mobile environment. The VNC algorithm anticipates configuration changes and speeds the reconfiguration process by pre-computing and caching results. VNC propagates local prediction results throughout the VNC enhanced system. The Global Positioning System is an enabling technology for the use of VNC in mobile networks because it provides location information and accurate time for each node. This research has resulted in well defined structures for the encapsulation of physical processes within Logical Processes and a generic library for enhancing a system with VNC. Enhancing an existing system with VNC is straight forward assuming the existing physical processes do not have side effects. The benefit of prediction is gained at the cost of additional traffic and processing. This research includes an analysis of VNC and suggestions for optimization of the VNC algorithm and its parameters.<|reference_end|> | arxiv | @article{bush1999the,
title={The Design and Analysis of Virtual Network Configuration for a Wireless
Mobile ATM Network},
author={Stephen F. Bush},
journal={arXiv preprint arXiv:cs/9905006},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905006},
primaryClass={cs.NI}
} | bush1999the |
arxiv-676320 | cs/9905007 | An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery | <|reference_start|>An Efficient, Probabilistically Sound Algorithm for Segmentation and Word Discovery: This paper presents a model-based, unsupervised algorithm for recovering word boundaries in a natural-language text from which they have been deleted. The algorithm is derived from a probability model of the source that generated the text. The fundamental structure of the model is specified abstractly so that the detailed component models of phonology, word-order, and word frequency can be replaced in a modular fashion. The model yields a language-independent, prior probability distribution on all possible sequences of all possible words over a given alphabet, based on the assumption that the input was generated by concatenating words from a fixed but unknown lexicon. The model is unusual in that it treats the generation of a complete corpus, regardless of length, as a single event in the probability space. Accordingly, the algorithm does not estimate a probability distribution on words; instead, it attempts to calculate the prior probabilities of various word sequences that could underlie the observed text. Experiments on phonemic transcripts of spontaneous speech by parents to young children suggest that this algorithm is more effective than other proposed algorithms, at least when utterance boundaries are given and the text includes a substantial number of short utterances. Keywords: Bayesian grammar induction, probability models, minimum description length (MDL), unsupervised learning, cognitive modeling, language acquisition, segmentation<|reference_end|> | arxiv | @article{brent1999an,
title={An Efficient, Probabilistically Sound Algorithm for Segmentation and
Word Discovery},
author={Michael R. Brent},
journal={Brent, M. R. (1999). An efficient, probabilistically sound
algorithm for segmentation and word discovery. Machine Learning 34, 71-105},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905007},
primaryClass={cs.CL cs.LG}
} | brent1999an |
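The idea of scoring whole candidate segmentations, rather than first estimating a word distribution, can be illustrated with a toy dynamic-programming sketch. This is not Brent's model; the lexicon, probabilities, and unknown-word penalty below are invented purely for illustration.

```python
import math
from functools import lru_cache

# Toy unigram lexicon with invented probabilities (illustrative only).
LEXICON = {"the": 0.06, "dog": 0.02, "dogs": 0.01, "saw": 0.015, "a": 0.05, "cat": 0.02}
UNKNOWN_LOG_PROB = math.log(1e-6)   # crude penalty for substrings not in the lexicon

def best_segmentation(text: str, max_word_len: int = 10):
    """Return (log probability, words) for the best-scoring segmentation of `text`."""
    @lru_cache(maxsize=None)
    def best_from(i: int):
        if i == len(text):
            return 0.0, ()
        best_lp, best_words = float("-inf"), ()
        for j in range(i + 1, min(len(text), i + max_word_len) + 1):
            word = text[i:j]
            word_lp = math.log(LEXICON[word]) if word in LEXICON else UNKNOWN_LOG_PROB
            rest_lp, rest_words = best_from(j)
            if word_lp + rest_lp > best_lp:
                best_lp, best_words = word_lp + rest_lp, (word,) + rest_words
        return best_lp, best_words

    return best_from(0)

if __name__ == "__main__":
    print(best_segmentation("thedogsawacat"))   # -> ('the', 'dog', 'saw', 'a', 'cat')
```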
arxiv-676321 | cs/9905008 | Inducing a Semantically Annotated Lexicon via EM-Based Clustering | <|reference_start|>Inducing a Semantically Annotated Lexicon via EM-Based Clustering: We present a technique for automatic induction of slot annotations for subcategorization frames, based on induction of hidden classes in the EM framework of statistical estimation. The models are empirically evalutated by a general decision test. Induction of slot labeling for subcategorization frames is accomplished by a further application of EM, and applied experimentally on frame observations derived from parsing large corpora. We outline an interpretation of the learned representations as theoretical-linguistic decompositional lexical entries.<|reference_end|> | arxiv | @article{rooth1999inducing,
title={Inducing a Semantically Annotated Lexicon via EM-Based Clustering},
author={Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz
Beil (IMS, University of Stuttgart)},
journal={arXiv preprint arXiv:cs/9905008},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905008},
primaryClass={cs.CL cs.AI cs.LG}
} | rooth1999inducing |
arxiv-676322 | cs/9905009 | Inside-Outside Estimation of a Lexicalized PCFG for German | <|reference_start|>Inside-Outside Estimation of a Lexicalized PCFG for German: The paper describes an extensive experiment in inside-outside estimation of a lexicalized probabilistic context free grammar for German verb-final clauses. Grammar and formalism features which make the experiment feasible are described. Successive models are evaluated on precision and recall of phrase markup.<|reference_end|> | arxiv | @article{beil1999inside-outside,
title={Inside-Outside Estimation of a Lexicalized PCFG for German},
author={Franz Beil, Glenn Carroll, Detlef Prescher, Stefan Riezler, and Mats
Rooth (IMS, University of Stuttgart)},
journal={arXiv preprint arXiv:cs/9905009},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905009},
primaryClass={cs.CL cs.LG}
} | beil1999inside-outside |
arxiv-676323 | cs/9905010 | Statistical Inference and Probabilistic Modelling for Constraint-Based NLP | <|reference_start|>Statistical Inference and Probabilistic Modelling for Constraint-Based NLP: We present a probabilistic model for constraint-based grammars and a method for estimating the parameters of such models from incomplete, i.e., unparsed data. Whereas methods exist to estimate the parameters of probabilistic context-free grammars from incomplete data (Baum 1970), so far for probabilistic grammars involving context-dependencies only parameter estimation techniques from complete, i.e., fully parsed data have been presented (Abney 1997). However, complete-data estimation requires labor-intensive, error-prone, and grammar-specific hand-annotating of large language corpora. We present a log-linear probability model for constraint logic programming, and a general algorithm to estimate the parameters of such models from incomplete data by extending the estimation algorithm of Della-Pietra, Della-Pietra, and Lafferty (1997) to incomplete data settings.<|reference_end|> | arxiv | @article{riezler1999statistical,
title={Statistical Inference and Probabilistic Modelling for Constraint-Based
NLP},
author={Stefan Riezler (IMS, University of Stuttgart)},
journal={arXiv preprint arXiv:cs/9905010},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905010},
primaryClass={cs.CL cs.LG}
} | riezler1999statistical |
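For reference, a log-linear probability model of the kind described above has the generic form below, with feature functions $f_i$ and weights $\lambda_i$; this is the textbook form, not the paper's specific parameterization over constraint-logic-programming analyses.

```latex
\[
  p_\lambda(x) \;=\; \frac{1}{Z_\lambda}\,
      \exp\!\Bigl(\textstyle\sum_i \lambda_i\, f_i(x)\Bigr),
  \qquad
  Z_\lambda \;=\; \sum_{x'} \exp\!\Bigl(\textstyle\sum_i \lambda_i\, f_i(x')\Bigr).
\]
```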
arxiv-676324 | cs/9905011 | Ensembles of Radial Basis Function Networks for Spectroscopic Detection of Cervical Pre-Cancer | <|reference_start|>Ensembles of Radial Basis Function Networks for Spectroscopic Detection of Cervical Pre-Cancer: The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, non-invasively and quantitatively probes the biochemical and morphological changes that occur in pre-cancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337, 380 and 460 nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from pre-cancerous tissue samples. The use of connectionist methods such as multi layered perceptrons, radial basis function networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated, and near real-time implementation of pre-cancer detection in the hands of non-experts. The results are more reliable, direct and accurate than those achieved by either human experts or multivariate statistical algorithms.<|reference_end|> | arxiv | @article{tumer1999ensembles,
title={Ensembles of Radial Basis Function Networks for Spectroscopic Detection
of Cervical Pre-Cancer},
author={Kagan Tumer, Nirmala Ramanujam, Joydeep Ghosh, and Rebecca
Richards-Kortum},
journal={IEEE Transactions on Biomedical Engineering, vol 45, no. 8, pp
953-962, 1998},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905011},
primaryClass={cs.NE cs.LG q-bio}
} | tumer1999ensembles |
arxiv-676325 | cs/9905012 | Linear and Order Statistics Combiners for Pattern Classification | <|reference_start|>Linear and Order Statistics Combiners for Pattern Classification: Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that to a first order approximation, the error rate obtained over and above the Bayes error rate, is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the "added" error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and in general the ith order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.<|reference_end|> | arxiv | @article{tumer1999linear,
title={Linear and Order Statistics Combiners for Pattern Classification},
author={Kagan Tumer and Joydeep Ghosh},
journal={Combining Artificial Neural Networks,Ed. Amanda Sharkey, pp
127-162, Springer Verlag, 1999},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905012},
primaryClass={cs.NE cs.LG}
} | tumer1999linear |
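The linear (averaging) and order-statistics (median, maximum) combiners analyzed above can be sketched in a few lines. The toy posterior, number of classifiers, and noise level below are invented for illustration and are not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def combine(outputs: np.ndarray, how: str = "average") -> np.ndarray:
    """Combine per-classifier posterior estimates of shape (n_classifiers, n_classes)."""
    if how == "average":          # linear combiner: simple mean of the outputs
        return outputs.mean(axis=0)
    if how == "median":           # order-statistics combiner: per-class median
        return np.median(outputs, axis=0)
    if how == "max":              # order-statistics combiner: per-class maximum
        return outputs.max(axis=0)
    raise ValueError(how)

# Toy setup: true posterior for a 3-class problem, observed by N noisy classifiers.
true_posterior = np.array([0.6, 0.3, 0.1])
N = 7
noisy = np.clip(true_posterior + 0.15 * rng.standard_normal((N, 3)), 1e-6, None)
noisy /= noisy.sum(axis=1, keepdims=True)   # renormalize each classifier's output

for how in ("average", "median", "max"):
    combined = combine(noisy, how)
    print(how, combined.argmax(), np.round(combined, 3))
```

With unbiased, weakly correlated noise, the averaged and median outputs typically sit closer to the true posterior than most individual classifiers, which is the effect the analysis quantifies.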
arxiv-676326 | cs/9905013 | Robust Combining of Disparate Classifiers through Order Statistics | <|reference_start|>Robust Combining of Disparate Classifiers through Order Statistics: Integrating the outputs of multiple classifiers via combiners or meta-learners has led to substantial improvements in several difficult pattern recognition problems. In the typical setting investigated till now, each classifier is trained on data taken or resampled from a common data set, or (almost) randomly selected subsets thereof, and thus experiences similar quality of training data. However, in certain situations where data is acquired and analyzed on-line at several geographically distributed locations, the quality of data may vary substantially, leading to large discrepancies in performance of individual classifiers. In this article we introduce and investigate a family of classifiers based on order statistics, for robust handling of such cases. Based on a mathematical modeling of how the decision boundaries are affected by order statistic combiners, we derive expressions for the reductions in error expected when such combiners are used. We show analytically that the selection of the median, the maximum and in general, the $i^{th}$ order statistic improves classification performance. Furthermore, we introduce the trim and spread combiners, both based on linear combinations of the ordered classifier outputs, and show that they are quite beneficial in presence of outliers or uneven classifier performance. Experimental results on several public domain data sets corroborate these findings.<|reference_end|> | arxiv | @article{tumer1999robust,
title={Robust Combining of Disparate Classifiers through Order Statistics},
author={Kagan Tumer and Joydeep Ghosh},
journal={arXiv preprint arXiv:cs/9905013},
year={1999},
number={UT-CVIS-TR-99-001 (The University of Texas)},
archivePrefix={arXiv},
eprint={cs/9905013},
primaryClass={cs.LG cs.CV cs.NE}
} | tumer1999robust |
arxiv-676327 | cs/9905014 | Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition | <|reference_start|>Hierarchical Reinforcement Learning with the MAXQ Value Function Decomposition: This paper presents the MAXQ approach to hierarchical reinforcement learning based on decomposing the target Markov decision process (MDP) into a hierarchy of smaller MDPs and decomposing the value function of the target MDP into an additive combination of the value functions of the smaller MDPs. The paper defines the MAXQ hierarchy, proves formal results on its representational power, and establishes five conditions for the safe use of state abstractions. The paper presents an online model-free learning algorithm, MAXQ-Q, and proves that it converges wih probability 1 to a kind of locally-optimal policy known as a recursively optimal policy, even in the presence of the five kinds of state abstraction. The paper evaluates the MAXQ representation and MAXQ-Q through a series of experiments in three domains and shows experimentally that MAXQ-Q (with state abstractions) converges to a recursively optimal policy much faster than flat Q learning. The fact that MAXQ learns a representation of the value function has an important benefit: it makes it possible to compute and execute an improved, non-hierarchical policy via a procedure similar to the policy improvement step of policy iteration. The paper demonstrates the effectiveness of this non-hierarchical execution experimentally. Finally, the paper concludes with a comparison to related work and a discussion of the design tradeoffs in hierarchical reinforcement learning.<|reference_end|> | arxiv | @article{dietterich1999hierarchical,
title={Hierarchical Reinforcement Learning with the MAXQ Value Function
Decomposition},
author={Thomas G. Dietterich},
journal={arXiv preprint arXiv:cs/9905014},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905014},
primaryClass={cs.LG}
} | dietterich1999hierarchical |
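In the notation of the MAXQ papers, the additive decomposition referred to above is written as follows, where subtask $i$ invokes child action $a$ in state $s$ and $C$ is the completion function:

```latex
\[
  Q^{\pi}(i, s, a) \;=\; V^{\pi}(a, s) \;+\; C^{\pi}(i, s, a),
  \qquad
  V^{\pi}(i, s) \;=\;
  \begin{cases}
    Q^{\pi}\bigl(i, s, \pi_i(s)\bigr) & \text{if $i$ is a composite subtask},\\[3pt]
    \sum_{s'} P(s' \mid s, i)\, R(s' \mid s, i) & \text{if $i$ is a primitive action}.
  \end{cases}
\]
```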
arxiv-676328 | cs/9905015 | State Abstraction in MAXQ Hierarchical Reinforcement Learning | <|reference_start|>State Abstraction in MAXQ Hierarchical Reinforcement Learning: Many researchers have explored methods for hierarchical reinforcement learning (RL) with temporal abstractions, in which abstract actions are defined that can perform many primitive actions before terminating. However, little is known about learning with state abstractions, in which aspects of the state space are ignored. In previous work, we developed the MAXQ method for hierarchical RL. In this paper, we define five conditions under which state abstraction can be combined with the MAXQ value function decomposition. We prove that the MAXQ-Q learning algorithm converges under these conditions and show experimentally that state abstraction is important for the successful application of MAXQ-Q learning.<|reference_end|> | arxiv | @article{dietterich1999state,
title={State Abstraction in MAXQ Hierarchical Reinforcement Learning},
author={Thomas G. Dietterich},
journal={arXiv preprint arXiv:cs/9905015},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905015},
primaryClass={cs.LG}
} | dietterich1999state |
arxiv-676329 | cs/9905016 | Programs with Stringent Performance Objectives Will Often Exhibit Chaotic Behavior | <|reference_start|>Programs with Stringent Performance Objectives Will Often Exhibit Chaotic Behavior: Software for the resolution of certain kinds of problems, those that rate high in the Stringent Performance Objectives adjustment factor (IFPUG scheme), can be described using a combination of game theory and autonomous systems. From this description it can be shown that some of those problems exhibit chaotic behavior, an important fact in understanding the functioning of the related software. As a relatively simple example, it is shown that chess exhibits chaotic behavior in its configuration space. This implies that static evaluators in chess programs have intrinsic limitations.<|reference_end|> | arxiv | @article{chaves1999programs,
title={Programs with Stringent Performance Objectives Will Often Exhibit
Chaotic Behavior},
author={M. Chaves},
journal={arXiv preprint arXiv:cs/9905016},
year={1999},
archivePrefix={arXiv},
eprint={cs/9905016},
primaryClass={cs.CE cs.CC}
} | chaves1999programs |
arxiv-676330 | cs/9906001 | On Bounded-Weight Error-Correcting Codes | <|reference_start|>On Bounded-Weight Error-Correcting Codes: This paper computationally obtains optimal bounded-weight, binary, error-correcting codes for a variety of distance bounds and dimensions. We compare the sizes of our codes to the sizes of optimal constant-weight, binary, error-correcting codes, and evaluate the differences.<|reference_end|> | arxiv | @article{bent1999on,
title={On Bounded-Weight Error-Correcting Codes},
author={Russell Bent, Michael Schear, Lane A. Hemaspaandra, Gabriel Istrate},
journal={arXiv preprint arXiv:cs/9906001},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906001},
primaryClass={cs.IT math.IT}
} | bent1999on |
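A bounded-weight binary code here is a set of length-$n$ binary words of Hamming weight at most $w$ that are pairwise at Hamming distance at least $d$. The greedy scan below is only an illustrative construction, not the authors' computational search, and therefore yields a lower bound on the optimal code size.

```python
def greedy_bounded_weight_code(n: int, w: int, d: int) -> list[int]:
    """Greedily collect length-n binary words (as integers) of Hamming weight <= w
    that are pairwise at Hamming distance >= d."""
    def weight(x: int) -> int:
        return bin(x).count("1")

    code: list[int] = []
    for word in range(2 ** n):                        # lexicographic scan of all words
        if weight(word) > w:
            continue
        if all(weight(word ^ c) >= d for c in code):  # popcount of XOR = Hamming distance
            code.append(word)
    return code

if __name__ == "__main__":
    code = greedy_bounded_weight_code(n=6, w=3, d=3)
    print(len(code), [format(c, "06b") for c in code])
```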
arxiv-676331 | cs/9906002 | The Symbol Grounding Problem | <|reference_start|>The Symbol Grounding Problem: How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols? The problem is analogous to trying to learn Chinese from a Chinese/Chinese dictionary alone. A candidate solution is sketched: Symbolic representations must be grounded bottom-up in nonsymbolic representations of two kinds: (1) "iconic representations," which are analogs of the proximal sensory projections of distal objects and events, and (2) "categorical representations," which are learned and innate feature-detectors that pick out the invariant features of object and event categories from their sensory projections. Elementary symbols are the names of these object and event categories, assigned on the basis of their (nonsymbolic) categorical representations. Higher-order (3) "symbolic representations," grounded in these elementary symbols, consist of symbol strings describing category membership relations (e.g., "An X is a Y that is Z").<|reference_end|> | arxiv | @article{harnad1999the,
title={The Symbol Grounding Problem},
author={Stevan Harnad},
journal={Physica D 42: 335-346},
year={1999},
doi={10.1016/0167-2789(90)90087-6},
archivePrefix={arXiv},
eprint={cs/9906002},
primaryClass={cs.AI}
} | harnad1999the |
arxiv-676332 | cs/9906003 | The syntactic processing of particles in Japanese spoken language | <|reference_start|>The syntactic processing of particles in Japanese spoken language: Particles fullfill several distinct central roles in the Japanese language. They can mark arguments as well as adjuncts, can be functional or have semantic funtions. There is, however, no straightforward matching from particles to functions, as, e.g., GA can mark the subject, the object or an adjunct of a sentence. Particles can cooccur. Verbal arguments that could be identified by particles can be eliminated in the Japanese sentence. And finally, in spoken language particles are often omitted. A proper treatment of particles is thus necessary to make an analysis of Japanese sentences possible. Our treatment is based on an empirical investigation of 800 dialogues. We set up a type hierarchy of particles motivated by their subcategorizational and modificational behaviour. This type hierarchy is part of the Japanese syntax in VERBMOBIL.<|reference_end|> | arxiv | @article{siegel1999the,
title={The syntactic processing of particles in Japanese spoken language},
author={Melanie Siegel},
journal={Proceedings of the 13th Pacific Asia Conference on Language,
Information and Computation. 1999},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906003},
primaryClass={cs.CL}
} | siegel1999the |
arxiv-676333 | cs/9906004 | Cascaded Grammatical Relation Assignment | <|reference_start|>Cascaded Grammatical Relation Assignment: In this paper we discuss cascaded Memory-Based grammatical relations assignment. In the first stages of the cascade, we find chunks of several types (NP,VP,ADJP,ADVP,PP) and label them with their adverbial function (e.g. local, temporal). In the last stage, we assign grammatical relations to pairs of chunks. We studied the effect of adding several levels to this cascaded classifier and we found that even the less performing chunkers enhanced the performance of the relation finder.<|reference_end|> | arxiv | @article{buchholz1999cascaded,
title={Cascaded Grammatical Relation Assignment},
author={Sabine Buchholz, Jorn Veenstra, Walter Daelemans},
journal={arXiv preprint arXiv:cs/9906004},
year={1999},
number={ILK-9908},
archivePrefix={arXiv},
eprint={cs/9906004},
primaryClass={cs.CL cs.LG}
} | buchholz1999cascaded |
arxiv-676334 | cs/9906005 | Memory-Based Shallow Parsing | <|reference_start|>Memory-Based Shallow Parsing: We present a memory-based learning (MBL) approach to shallow parsing in which POS tagging, chunking, and identification of syntactic relations are formulated as memory-based modules. The experiments reported in this paper show competitive results, the F-value for the Wall Street Journal (WSJ) treebank is: 93.8% for NP chunking, 94.7% for VP chunking, 77.1% for subject detection and 79.0% for object detection.<|reference_end|> | arxiv | @article{daelemans1999memory-based,
title={Memory-Based Shallow Parsing},
author={Walter Daelemans, Sabine Buchholz, Jorn Veenstra},
journal={arXiv preprint arXiv:cs/9906005},
year={1999},
number={ILK-9907},
archivePrefix={arXiv},
eprint={cs/9906005},
primaryClass={cs.CL cs.LG}
} | daelemans1999memory-based |
arxiv-676335 | cs/9906006 | Learning Efficient Disambiguation | <|reference_start|>Learning Efficient Disambiguation: This dissertation analyses the computational properties of current performance-models of natural language parsing, in particular Data Oriented Parsing (DOP), points out some of their major shortcomings and suggests suitable solutions. It provides proofs that various problems of probabilistic disambiguation are NP-Complete under instances of these performance-models, and it argues that none of these models accounts for attractive efficiency properties of human language processing in limited domains, e.g. that frequent inputs are usually processed faster than infrequent ones. The central hypothesis of this dissertation is that these shortcomings can be eliminated by specializing the performance-models to the limited domains. The dissertation addresses "grammar and model specialization" and presents a new framework, the Ambiguity-Reduction Specialization (ARS) framework, that formulates the necessary and sufficient conditions for successful specialization. The framework is instantiated into specialization algorithms and applied to specializing DOP. Novelties of these learning algorithms are 1) they limit the hypotheses-space to include only "safe" models, 2) are expressed as constrained optimization formulae that minimize the entropy of the training tree-bank given the specialized grammar, under the constraint that the size of the specialized model does not exceed a predefined maximum, and 3) they enable integrating the specialized model with the original one in a complementary manner. The dissertation provides experiments with initial implementations and compares the resulting Specialized DOP (SDOP) models to the original DOP models with encouraging results.<|reference_end|> | arxiv | @article{sima'an1999learning,
title={Learning Efficient Disambiguation},
author={Khalil Sima'an},
journal={arXiv preprint arXiv:cs/9906006},
year={1999},
number={Ph.d. thesis, ILLC Dissertation Series number 1999-02, University of
Amsterdam},
archivePrefix={arXiv},
eprint={cs/9906006},
primaryClass={cs.CL cs.AI}
} | sima'an1999learning |
arxiv-676336 | cs/9906007 | MSO definable string transductions and two-way finite state transducers | <|reference_start|>MSO definable string transductions and two-way finite state transducers: String transductions that are definable in monadic second-order (mso) logic (without the use of parameters) are exactly those realized by deterministic two-way finite state transducers. Nondeterministic mso definable string transductions (i.e., those definable with the use of parameters) correspond to compositions of two nondeterministic two-way finite state transducers that have the finite visit property. Both families of mso definable string transductions are characterized in terms of Hennie machines, i.e., two-way finite state transducers with the finite visit property that are allowed to rewrite their input tape.<|reference_end|> | arxiv | @article{engelfriet1999mso,
title={MSO definable string transductions and two-way finite state transducers},
author={Joost Engelfriet, Hendrik Jan Hoogeboom},
journal={arXiv preprint arXiv:cs/9906007},
year={1999},
number={TR 98-13, LIACS, Leiden University, The Netherlands},
archivePrefix={arXiv},
eprint={cs/9906007},
primaryClass={cs.LO cs.CC}
} | engelfriet1999mso |
arxiv-676337 | cs/9906008 | A Lower Bound on the Average-Case Complexity of Shellsort | <|reference_start|>A Lower Bound on the Average-Case Complexity of Shellsort: We prove a general lower bound on the average-case complexity of Shellsort: the average number of data-movements (and comparisons) made by a $p$-pass Shellsort for any incremental sequence is $\Omega (pn^{1 + 1/p})$ for every $p$. The proof method is an incompressibility argument based on Kolmogorov complexity. Using similar techniques, the average-case complexity of several other sorting algorithms is analyzed.<|reference_end|> | arxiv | @article{jiang1999a,
title={A Lower Bound on the Average-Case Complexity of Shellsort},
author={Tao Jiang (McMaster U.), Ming Li (U. Waterloo), Paul Vitanyi (CWI & U.
Amsterdam)},
journal={T. Jiang, M. Li, and P. Vitanyi, A lower bound on the average-case
complexity of Shellsort, J. Assoc. Comp. Mach., 47:5(2000), 905--91},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906008},
primaryClass={cs.CC cs.DS}
} | jiang1999a |
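A routine instantiation of the stated bound (not an additional result from the paper) makes the pass/time trade-off concrete:

```latex
\[
  \Omega\bigl(p\,n^{1+1/p}\bigr):\qquad
  p=1:\ \Omega(n^{2}),\qquad
  p=2:\ \Omega(n^{3/2}),\qquad
  p=3:\ \Omega(n^{4/3}),\qquad
  p=\Theta(\log n):\ \Omega(n\log n),
\]
% the last case because n^{1/\log_2 n} = 2, a constant.
```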
arxiv-676338 | cs/9906009 | Cascaded Markov Models | <|reference_start|>Cascaded Markov Models: This paper presents a new approach to partial parsing of context-free structures. The approach is based on Markov Models. Each layer of the resulting structure is represented by its own Markov Model, and output of a lower layer is passed as input to the next higher layer. An empirical evaluation of the method yields very good results for NP/PP chunking of German newspaper texts.<|reference_end|> | arxiv | @article{brants1999cascaded,
title={Cascaded Markov Models},
author={Thorsten Brants},
journal={Proceedings of EACL-99, Bergen, Norway},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906009},
primaryClass={cs.CL}
} | brants1999cascaded |
arxiv-676339 | cs/9906010 | Predicate Logic with Definitions | <|reference_start|>Predicate Logic with Definitions: Predicate Logic with Definitions (PLD or D-logic) is a modification of first-order logic intended mostly for practical formalization of mathematics. The main syntactic constructs of D-logic are terms, formulas and definitions. A definition is a definition of variables, a definition of constants, or a composite definition (D-logic has also abbreviation definitions called abbreviations). Definitions can be used inside terms and formulas. This possibility alleviates introducing new quantifier-like names. Composite definitions allow constructing new definitions from existing ones.<|reference_end|> | arxiv | @article{makarov1999predicate,
title={Predicate Logic with Definitions},
author={Victor Makarov},
journal={arXiv preprint arXiv:cs/9906010},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906010},
primaryClass={cs.LO cs.AI}
} | makarov1999predicate |
arxiv-676340 | cs/9906011 | A Newton method without evaluation of nonlinear function values | <|reference_start|>A Newton method without evaluation of nonlinear function values: The present author recently proposed and proved a relationship theorem between nonlinear polynomial equations and the corresponding Jacobian matrix. By using this theorem, this paper derives a Newton iterative formula without requiring the evaluation of nonlinear function values in the solution of nonlinear polynomial-only problems.<|reference_end|> | arxiv | @article{chen1999a,
title={A Newton method without evaluation of nonlinear function values},
author={W. Chen},
journal={arXiv preprint arXiv:cs/9906011},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906011},
primaryClass={cs.CE cs.NA math.NA}
} | chen1999a |
arxiv-676341 | cs/9906012 | The application of special matrix product to differential quadrature solution of geometrically nonlinear bending of orthotropic rectangular plates | <|reference_start|>The application of special matrix product to differential quadrature solution of geometrically nonlinear bending of orthotropic rectangular plates: The Hadamard and SJT product of matrices are two types of special matrix product. The latter was first defined by Chen. In this study, they are applied to the differential quadrature (DQ) solution of geometrically nonlinear bending of isotropic and orthotropic rectangular plates. By using the Hadamard product, the nonlinear formulations are greatly simplified, while the SJT product approach minimizes the effort to evaluate the Jacobian derivative matrix in the Newton-Raphson method for solving the resultant nonlinear formulations. In addition, the coupled nonlinear formulations for the present problems can easily be decoupled by means of the Hadamard and SJT product. Therefore, the size of the simultaneous nonlinear algebraic equations is reduced by two-thirds and the computing effort and storage requirements are alleviated greatly. Two recent approaches applying the multiple boundary conditions are employed in the present DQ nonlinear computations. The solution accuracies are improved obviously in comparison to the previously given by Bert et al. The numerical results and detailed solution procedures are provided to demonstrate the superb efficiency, accuracy and simplicity of the new approaches in applying DQ method for nonlinear computations.<|reference_end|> | arxiv | @article{chen1999the,
title={The application of special matrix product to differential quadrature
solution of geometrically nonlinear bending of orthotropic rectangular plates},
author={W. Chen, C. Shu, W. He},
journal={arXiv preprint arXiv:cs/9906012},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906012},
primaryClass={cs.CE cs.NA math.NA}
} | chen1999the |
arxiv-676342 | cs/9906013 | Combining Inclusion Polymorphism and Parametric Polymorphism | <|reference_start|>Combining Inclusion Polymorphism and Parametric Polymorphism: We show that the question whether a term is typable is decidable for type systems combining inclusion polymorphism with parametric polymorphism provided the type constructors are at most unary. To prove this result we first reduce the typability problem to the problem of solving a system of type inequations. The result is then obtained by showing that the solvability of the resulting system of type inequations is decidable.<|reference_end|> | arxiv | @article{glesner1999combining,
title={Combining Inclusion Polymorphism and Parametric Polymorphism},
author={Sabine Glesner and Karl Stroetmann},
journal={arXiv preprint arXiv:cs/9906013},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906013},
primaryClass={cs.LO cs.PL}
} | glesner1999combining |
arxiv-676343 | cs/9906014 | Evaluation of the NLP Components of the OVIS2 Spoken Dialogue System | <|reference_start|>Evaluation of the NLP Components of the OVIS2 Spoken Dialogue System: The NWO Priority Programme Language and Speech Technology is a 5-year research programme aiming at the development of spoken language information systems. In the Programme, two alternative natural language processing (NLP) modules are developed in parallel: a grammar-based (conventional, rule-based) module and a data-oriented (memory-based, stochastic, DOP) module. In order to compare the NLP modules, a formal evaluation has been carried out three years after the start of the Programme. This paper describes the evaluation procedure and the evaluation results. The grammar-based component performs much better than the data-oriented one in this comparison.<|reference_end|> | arxiv | @article{van zanten1999evaluation,
title={Evaluation of the NLP Components of the OVIS2 Spoken Dialogue System},
author={Gert Veldhuijzen van Zanten and Gosse Bouma and Khalil Sima'an and
Gertjan van Noord and Remko Bonnema},
journal={arXiv preprint arXiv:cs/9906014},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906014},
primaryClass={cs.CL}
} | van zanten1999evaluation |
arxiv-676344 | cs/9906015 | Learning Transformation Rules to Find Grammatical Relations | <|reference_start|>Learning Transformation Rules to Find Grammatical Relations: Grammatical relationships are an important level of natural language processing. We present a trainable approach to find these relationships through transformation sequences and error-driven learning. Our approach finds grammatical relationships between core syntax groups and bypasses much of the parsing phase. On our training and test set, our procedure achieves 63.6% recall and 77.3% precision (f-score = 69.8).<|reference_end|> | arxiv | @article{ferro1999learning,
title={Learning Transformation Rules to Find Grammatical Relations},
author={Lisa Ferro, Marc Vilain and Alexander Yeh},
journal={Computational Natural Language Learning (CoNLL-99), pages 43-52,
June, 1999. Bergen, Norway},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906015},
primaryClass={cs.CL}
} | ferro1999learning |
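The record above relies on transformation-based error-driven learning. As a reminder of how that learning loop works in general, here is a minimal, generic sketch: at each round, greedily pick the candidate rule that most reduces error on the training data, apply it, and repeat. The toy data, rule representation, and scoring are illustrative assumptions, not the grammatical-relation rules of the paper.

```python
def tbl_train(examples, gold, candidate_rules, max_rules=10):
    """Greedy transformation-based learning loop.

    examples: list of current labels (initialised by some baseline).
    gold:     list of correct labels.
    candidate_rules: iterable of (name, fn) where fn maps a label list
                     to a new label list.
    Returns the ordered list of rule names that were learned.
    """
    labels = list(examples)
    learned = []
    for _ in range(max_rules):
        errors = sum(a != b for a, b in zip(labels, gold))
        best = None
        for name, fn in candidate_rules:
            new = fn(labels)
            gain = errors - sum(a != b for a, b in zip(new, gold))
            if best is None or gain > best[0]:
                best = (gain, name, new)
        if best is None or best[0] <= 0:      # no rule improves accuracy
            break
        learned.append(best[1])
        labels = best[2]
    return learned

# Tiny illustrative run: baseline labels everything "O"; one candidate rule
# relabels position 1 as "SUBJ", another relabels position 2 as "OBJ".
gold = ["O", "SUBJ", "OBJ"]
rules = [
    ("pos1->SUBJ", lambda ls: [l if i != 1 else "SUBJ" for i, l in enumerate(ls)]),
    ("pos2->OBJ",  lambda ls: [l if i != 2 else "OBJ" for i, l in enumerate(ls)]),
]
print(tbl_train(["O", "O", "O"], gold, rules))   # -> ['pos1->SUBJ', 'pos2->OBJ']
```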
arxiv-676345 | cs/9906016 | Automatically Selecting Useful Phrases for Dialogue Act Tagging | <|reference_start|>Automatically Selecting Useful Phrases for Dialogue Act Tagging: We present an empirical investigation of various ways to automatically identify phrases in a tagged corpus that are useful for dialogue act tagging. We found that a new method (which measures a phrase's deviation from an optimally-predictive phrase), enhanced with a lexical filtering mechanism, produces significantly better cues than manually-selected cue phrases, the exhaustive set of phrases in a training corpus, and phrases chosen by traditional metrics, like mutual information and information gain.<|reference_end|> | arxiv | @article{samuel1999automatically,
title={Automatically Selecting Useful Phrases for Dialogue Act Tagging},
author={Ken Samuel, Sandra Carberry, and K. Vijay-Shanker},
journal={Samuel, Ken and Carberry, Sandra and Vijay-Shanker, K. 1999.
Automatically Selecting Useful Phrases for Dialogue Act Tagging. In
Proceedings of the Fourth Conference of the Pacific Association for
Computational Linguistics. Waterloo, Ontario, Canada},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906016},
primaryClass={cs.AI cs.LG}
} | samuel1999automatically |
arxiv-676346 | cs/9906017 | Generalization of automatic sequences for numeration systems on a regular language | <|reference_start|>Generalization of automatic sequences for numeration systems on a regular language: Let L be an infinite regular language on a totally ordered alphabet (A,<). Feeding a finite deterministic automaton (with output) with the words of L enumerated lexicographically with respect to < leads to an infinite sequence over the output alphabet of the automaton. This process generalizes the concept of k-automatic sequence for abstract numeration systems on a regular language (instead of systems in base k). Here, I study the first properties of these sequences and their relations with numeration systems.<|reference_end|> | arxiv | @article{rigo1999generalization,
title={Generalization of automatic sequences for numeration systems on a
regular language},
author={Michel Rigo},
journal={Theoret. Comput. Sci. 244 (2000) 271--281},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906017},
primaryClass={cs.CC}
} | rigo1999generalization |
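The construction in the record above is easy to simulate. Assuming a small regular language and a simple word-to-output map as stand-ins for the automaton with output, the sketch below enumerates the words of L in genealogical order (by length, then lexicographically for the given alphabet order) and records the output on each word, yielding a generalized automatic sequence.

```python
from itertools import count, product

def words_in_order(alphabet, accepts, limit):
    """Enumerate the first `limit` words of the language, ordered by
    length and then lexicographically with respect to `alphabet`."""
    out = []
    for n in count(0):
        for w in product(alphabet, repeat=n):
            word = "".join(w)
            if accepts(word):
                out.append(word)
                if len(out) == limit:
                    return out

# Stand-in regular language L = a*b* over the ordered alphabet (a < b).
accepts = lambda w: "ba" not in w

# Stand-in automaton output: parity of the number of b's in the word.
def output(word):
    return word.count("b") % 2

sequence = [output(w) for w in words_in_order("ab", accepts, 20)]
print(sequence)
```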
arxiv-676347 | cs/9906018 | Reconstructing Polyatomic Structures from Discrete X-Rays: NP-Completeness Proof for Three Atoms | <|reference_start|>Reconstructing Polyatomic Structures from Discrete X-Rays: NP-Completeness Proof for Three Atoms: We address a discrete tomography problem that arises in the study of the atomic structure of crystal lattices. A polyatomic structure T can be defined as an integer lattice in dimension D>=2, whose points may be occupied by $c$ distinct types of atoms. To ``analyze'' T, we conduct ell measurements that we call _discrete X-rays_. A discrete X-ray in direction xi determines the number of atoms of each type on each line parallel to xi. Given ell such non-parallel X-rays, we wish to reconstruct T. The complexity of the problem for c=1 (one atom type) has been completely determined by Gardner, Gritzmann and Prangenberg, who proved that the problem is NP-complete for any dimension D>=2 and ell>=3 non-parallel X-rays, and that it can be solved in polynomial time otherwise. The NP-completeness result above clearly extends to any c>=2, and therefore when studying the polyatomic case we can assume that ell=2. As shown in another article by the same authors, this problem is also NP-complete for c>=6 atoms, even for dimension D=2 and axis-parallel X-rays. They conjecture that the problem remains NP-complete for c=3,4,5, although, as they point out, the proof idea does not seem to extend to c<=5. We resolve the conjecture by proving that the problem is indeed NP-complete for c>=3 in 2D, even for axis-parallel X-rays. Our construction relies heavily on some structure results for the realizations of 0-1 matrices with given row and column sums.<|reference_end|> | arxiv | @article{durr1999reconstructing,
title={Reconstructing Polyatomic Structures from Discrete X-Rays:
NP-Completeness Proof for Three Atoms},
author={Christoph Durr and Marek Chrobak},
journal={Proceedings of the 23rd International Symposium on Mathematical
Foundations of Computer Science, LNCS vol 1450, 185-193, 1998},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906018},
primaryClass={cs.DS cs.CC}
} | durr1999reconstructing |
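For contrast with the NP-completeness results in the record above, the single-atom case with two orthogonal X-rays (row and column sums of a 0-1 matrix) is solvable by the classical greedy construction attributed to Ryser. A minimal sketch of that easy special case is given below; the row and column sums are illustrative assumptions, and nothing here addresses the polyatomic (c>=3) case that the paper proves hard.

```python
def ryser_reconstruct(row_sums, col_sums):
    """Build a 0-1 matrix with the given row and column sums, if one exists.

    Greedy rule: fill each row by placing its ones in the columns with the
    largest remaining column demand.  Returns the matrix or None.
    """
    n_cols = len(col_sums)
    remaining = list(col_sums)
    matrix = []
    for r in row_sums:
        cols = sorted(range(n_cols), key=lambda j: -remaining[j])[:r]
        if any(remaining[j] == 0 for j in cols):
            return None                      # infeasible
        row = [0] * n_cols
        for j in cols:
            row[j] = 1
            remaining[j] -= 1
        matrix.append(row)
    return matrix if all(v == 0 for v in remaining) else None

# Illustrative instance: row sums (2, 1, 2), column sums (2, 2, 1).
for row in ryser_reconstruct([2, 1, 2], [2, 2, 1]) or []:
    print(row)
```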
arxiv-676348 | cs/9906019 | Resolving Part-of-Speech Ambiguity in the Greek Language Using Learning Techniques | <|reference_start|>Resolving Part-of-Speech Ambiguity in the Greek Language Using Learning Techniques: This article investigates the use of Transformation-Based Error-Driven learning for resolving part-of-speech ambiguity in the Greek language. The aim is not only to study the performance, but also to examine its dependence on different thematic domains. Results are presented here for two different test cases: a corpus on "management succession events" and a general-theme corpus. The two experiments show that the performance of this method does not depend on the thematic domain of the corpus, and its accuracy for the Greek language is around 95%.<|reference_end|> | arxiv | @article{petasis1999resolving,
title={Resolving Part-of-Speech Ambiguity in the Greek Language Using Learning
Techniques},
author={G. Petasis, G. Paliouras, V. Karkaletsis, C. D. Spyropoulos and I.
Androutsopoulos (Software & Knowledge Engineering Lab, Institute of
Informatics & Telecommunications, NCSR Demokritos, Greece)},
journal={In Fakotakis, N. et al. (Eds.), Machine Learning in Human Language
Technology (Proceedings of the ACAI Workshop), pp. 29-34, Chania, Greece,
1999.},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906019},
primaryClass={cs.CL cs.AI}
} | petasis1999resolving |
arxiv-676349 | cs/9906020 | Temporal Meaning Representations in a Natural Language Front-End | <|reference_start|>Temporal Meaning Representations in a Natural Language Front-End: Previous work in the context of natural language querying of temporal databases has established a method to map automatically from a large subset of English time-related questions to suitable expressions of a temporal logic-like language, called TOP. An algorithm to translate from TOP to the TSQL2 temporal database language has also been defined. This paper shows how TOP expressions could be translated into a simpler logic-like language, called BOT. BOT is very close to traditional first-order predicate logic (FOPL), and hence existing methods to manipulate FOPL expressions can be exploited to interface to time-sensitive applications other than TSQL2 databases, maintaining the existing English-to-TOP mapping.<|reference_end|> | arxiv | @article{androutsopoulos1999temporal,
title={Temporal Meaning Representations in a Natural Language Front-End},
author={I. Androutsopoulos (Software & Knowledge Engineering Lab, Institute of
Informatics & Telecommunications, NCSR Demokritos, Greece)},
journal={In Gergatsoulis, M. and Rondogiannis, P. (Eds.), Intensional
Programming II (Proceedings of the 12th International Symposium on Languages
for Intensional Programming, Athens, Greece, 1999), pp. 197-213, World
Scientific, 2000.},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906020},
primaryClass={cs.CL}
} | androutsopoulos1999temporal |
arxiv-676350 | cs/9906021 | Reconstructing hv-Convex Polyominoes from Orthogonal Projections | <|reference_start|>Reconstructing hv-Convex Polyominoes from Orthogonal Projections: Tomography is the area of reconstructing objects from projections. Here we wish to reconstruct a set of cells in a two dimensional grid, given the number of cells in every row and column. The set is required to be an hv-convex polyomino, that is, all its cells must be connected and the cells in every row and column must be consecutive. A simple, polynomial algorithm for reconstructing hv-convex polyominoes is provided, which is several orders of magnitude faster than the best previously known algorithm from Barcucci et al. In addition, the problem of reconstructing a special class of centered hv-convex polyominoes is addressed. (An object is centered if it contains a row whose length equals the total width of the object). It is shown that in this case the reconstruction problem can be solved in linear time.<|reference_end|> | arxiv | @article{durr1999reconstructing,
title={Reconstructing hv-Convex Polyominoes from Orthogonal Projections},
author={Christoph Durr and Marek Chrobak},
journal={Information Processing Letters, 69, 1999, 283-289},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906021},
primaryClass={cs.DS}
} | durr1999reconstructing |
arxiv-676351 | cs/9906022 | Zero-Parity Stabbing Information | <|reference_start|>Zero-Parity Stabbing Information: Everett et al. introduced several varieties of stabbing information for the lines determined by pairs of vertices of a simple polygon P, and established their relationships to vertex visibility and other combinatorial data. In the same spirit, we define the ``zero-parity (ZP) stabbing information'' to be a natural weakening of their ``weak stabbing information,'' retaining only the distinction among {zero, odd, even>0} in the number of polygon edges stabbed. Whereas the weak stabbing information's relation to visibility remains an open problem, we completely settle the analogous questions for zero-parity information, with three results: (1) ZP information is insufficient to distinguish internal from external visibility graph edges; (2) but it does suffice for all polygons that avoid a certain complex substructure; and (3) the natural generalization of ZP information to the continuous case of smooth curves does distinguish internal from external visibility.<|reference_end|> | arxiv | @article{o'rourke1999zero-parity,
title={Zero-Parity Stabbing Information},
author={Joseph O'Rourke and Irena Pashchenko},
journal={Proc. Japan Conf. Discrete Comput. Geom. '98, Dec. 1998, 93--97},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906022},
primaryClass={cs.CG cs.DM}
} | o'rourke1999zero-parity |
arxiv-676352 | cs/9906023 | Computational Geometry Column 35 | <|reference_start|>Computational Geometry Column 35: The subquadratic algorithm of Kapoor for finding shortest paths on a polyhedron is described.<|reference_end|> | arxiv | @article{o'rourke1999computational,
title={Computational Geometry Column 35},
author={Joseph O'Rourke},
journal={SIGACT News, 30(2) Issue #111 (1999) 31-32},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906023},
primaryClass={cs.CG}
} | o'rourke1999computational |
arxiv-676353 | cs/9906024 | A decision procedure for well-formed linear quantum cellular automata | <|reference_start|>A decision procedure for well-formed linear quantum cellular automata: In this paper we introduce a new quantum computation model, the linear quantum cellular automaton. Well-formedness is an essential property for any quantum computing device since it enables us to define the probability of a configuration in an observation as the squared magnitude of its amplitude. We give an efficient algorithm which decides if a linear quantum cellular automaton is well-formed. The complexity of the algorithm is $O(n^2)$ in the algebraic model of computation if the input automaton has continuous neighborhood.<|reference_end|> | arxiv | @article{durr1999a,
title={A decision procedure for well-formed linear quantum cellular automata},
author={Christoph Durr, Huong LeThanh and Miklos Santha},
journal={Random Structures and Algorithms 11, 381-394, 1997},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906024},
primaryClass={cs.DS cs.CC quant-ph}
} | durr1999a |
arxiv-676354 | cs/9906025 | Mapping Multilingual Hierarchies Using Relaxation Labeling | <|reference_start|>Mapping Multilingual Hierarchies Using Relaxation Labeling: This paper explores the automatic construction of a multilingual Lexical Knowledge Base from pre-existing lexical resources. We present a new and robust approach for linking already existing lexical/semantic hierarchies. We used a constraint satisfaction algorithm (relaxation labeling) to select --among all the candidate translations proposed by a bilingual dictionary-- the right English WordNet synset for each sense in a taxonomy automatically derived from a Spanish monolingual dictionary. Although on average, there are 15 possible WordNet connections for each sense in the taxonomy, the method achieves an accuracy over 80%. Finally, we also propose several ways in which this technique could be applied to enrich and improve existing lexical databases.<|reference_end|> | arxiv | @article{daude1999mapping,
title={Mapping Multilingual Hierarchies Using Relaxation Labeling},
author={J. Daude, L. Padro and G. Rigau (TALP Research Center. LSI Dept.
Universitat Politecnica de Catalunya. Barcelona)},
journal={arXiv preprint arXiv:cs/9906025},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906025},
primaryClass={cs.CL}
} | daude1999mapping |
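The record above uses relaxation labeling as its constraint-satisfaction engine. As a reminder of how a basic relaxation-labeling update works, here is a generic sketch of the classical normalised support update; the labels, support values, and number of iterations are toy assumptions and have nothing to do with the actual WordNet-linking constraints of the paper.

```python
def relaxation_step(p, support):
    """One synchronous relaxation-labeling update.

    p[i][l]       -- current probability of label l for variable i
    support[i][l] -- support for that assignment from neighbouring
                     variables (assumed here to lie in [-1, 1])
    """
    new_p = []
    for pi, si in zip(p, support):
        weighted = {l: pi[l] * (1.0 + si[l]) for l in pi}
        z = sum(weighted.values()) or 1.0
        new_p.append({l: v / z for l, v in weighted.items()})
    return new_p

# Toy run: one variable, two candidate labels, label "a" better supported.
p = [{"a": 0.5, "b": 0.5}]
support = [{"a": 0.4, "b": -0.2}]
for _ in range(5):
    p = relaxation_step(p, support)
print(p)   # probability mass drifts towards label "a"
```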
arxiv-676355 | cs/9906026 | Robust Grammatical Analysis for Spoken Dialogue Systems | <|reference_start|>Robust Grammatical Analysis for Spoken Dialogue Systems: We argue that grammatical analysis is a viable alternative to concept spotting for processing spoken input in a practical spoken dialogue system. We discuss the structure of the grammar, and a model for robust parsing which combines linguistic sources of information and statistical sources of information. We discuss test results suggesting that grammatical processing allows fast and accurate processing of spoken input.<|reference_end|> | arxiv | @article{van noord1999robust,
title={Robust Grammatical Analysis for Spoken Dialogue Systems},
author={Gertjan van Noord and Gosse Bouma and Rob Koeling and Mark-Jan
Nederhof},
journal={arXiv preprint arXiv:cs/9906026},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906026},
primaryClass={cs.CL}
} | van noord1999robust |
arxiv-676356 | cs/9906027 | Human-Computer Conversation | <|reference_start|>Human-Computer Conversation: The article surveys a little of the history of the technology, sets out the main current theoretical approaches in brief, and discusses the on-going opposition between theoretical and empirical approaches. It illustrates the situation with some discussion of CONVERSE, a system that won the Loebner prize in 1997 and which displays features of both approaches.<|reference_end|> | arxiv | @article{wilks1999human-computer,
title={Human-Computer Conversation},
author={Yorick Wilks and Roberta Catizone},
journal={arXiv preprint arXiv:cs/9906027},
year={1999},
number={CS-99-04},
archivePrefix={arXiv},
eprint={cs/9906027},
primaryClass={cs.CL cs.HC}
} | wilks1999human-computer |
arxiv-676357 | cs/9906028 | On the Power of Positive Turing Reductions | <|reference_start|>On the Power of Positive Turing Reductions: In the early 1980s, Selman's seminal work on positive Turing reductions showed that positive Turing reduction to NP yields no greater computational power than NP itself. Thus, positive Turing and Turing reducibility to NP differ sharply unless the polynomial hierarchy collapses. We show that the situation is quite different for DP, the next level of the boolean hierarchy. In particular, positive Turing reduction to DP already yields all (and only) sets Turing reducible to NP. Thus, positive Turing and Turing reducibility to DP yield the same class. Additionally, we show that an even weaker class, P(NP[1]), can be substituted for DP in this context.<|reference_end|> | arxiv | @article{hemaspaandra1999on,
title={On the Power of Positive Turing Reductions},
author={Edith Hemaspaandra},
journal={arXiv preprint arXiv:cs/9906028},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906028},
primaryClass={cs.CC}
} | hemaspaandra1999on |
arxiv-676358 | cs/9906029 | Events in Property Patterns | <|reference_start|>Events in Property Patterns: A pattern-based approach to the presentation, codification and reuse of property specifications for finite-state verification was proposed by Dwyer and his colleagues. The patterns enable non-experts to read and write formal specifications for realistic systems and facilitate easy conversion of specifications between formalisms, such as LTL, CTL, QRE. In this paper, we extend the pattern system with events - changes of values of variables in the context of LTL.<|reference_end|> | arxiv | @article{chechik1999events,
title={Events in Property Patterns},
author={M.Chechik, D.Paun},
journal={Lecture notes in Computer Science (Proceedings of 6 Spin'99
Workshop)},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906029},
primaryClass={cs.SE cs.AI cs.CL cs.SC}
} | chechik1999events |
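Since the record above is about adding events (changes of variable values) to LTL property patterns, a small worked illustration may help. One conventional, textbook-style way to express the event "p becomes true" (a rising edge) and "p becomes false" (a falling edge) uses the next-state operator X; the exact encoding used in the paper is not reproduced here.

$up(p) \equiv \neg p \wedge X\,p \qquad down(p) \equiv p \wedge X\,\neg p$

For instance, the requirement "q never holds after p rises" can then be written as $G\,((\neg p \wedge X\,p) \rightarrow X\,G\,\neg q)$.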
arxiv-676359 | cs/9906030 | SCR3: towards usability of formal methods | <|reference_start|>SCR3: towards usability of formal methods: This paper gives an overview of SCR3 -- a toolset designed to increase the usability of formal methods for software development. Formal requirements are specified in SCR3 in an easy to use and review format, and then used in checking requirements for correctness and in verifying consistency between annotated code and requirements. In this paper we discuss motivations behind this work, describe several tools which are part of SCR3, and illustrate their operation on an example of a Cruise Control system.<|reference_end|> | arxiv | @article{chechik1999scr3:,
title={SCR3: towards usability of formal methods},
author={M. Chechik},
journal={Proceedings of CASCON'98, December 1998, pp. 177-191},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906030},
primaryClass={cs.SE}
} | chechik1999scr3: |
arxiv-676360 | cs/9906031 | Events in Linear-Time Properties | <|reference_start|>Events in Linear-Time Properties: For over a decade, researchers in formal methods tried to create formalisms that permit natural specification of systems and allow mathematical reasoning about their correctness. The availability of fully-automated reasoning tools enables more non-specialists to use formal methods effectively --- their responsibility reduces to just specifying the model and expressing the desired properties. Thus, it is essential that these properties be represented in a language that is easy to use and sufficiently expressive. Linear-time temporal logic is a formalism that has been extensively used by researchers for specifying properties of systems. When such properties are closed under stuttering, i.e. their interpretation is not modified by transitions that leave the system in the same state, verification tools can utilize a partial-order reduction technique to reduce the size of the model and thus analyze larger systems. If LTL formulas do not contain the ``next'' operator, the formulas are closed under stuttering, but the resulting language is not expressive enough to capture many important properties, e.g., properties involving events. Determining if an arbitrary LTL formula is closed under stuttering is hard --- it has been proven to be PSPACE-complete. In this paper we relax the restriction on LTL that guarantees closure under stuttering, introduce the notion of edges in the context of LTL, and provide theorems that enable syntactic reasoning about closure under stuttering of LTL formulas.<|reference_end|> | arxiv | @article{paun1999events,
title={Events in Linear-Time Properties},
author={D. Paun, M. Chechik},
journal={Proceedings of 4th IEEE International Symposium on Requirements
Engineering, June 1999, pp. 123-132},
year={1999},
doi={10.1109/ISRE.1999.777992},
archivePrefix={arXiv},
eprint={cs/9906031},
primaryClass={cs.SE}
} | paun1999events |
arxiv-676361 | cs/9906032 | Formal Modeling in a Commercial Setting: A Case Study | <|reference_start|>Formal Modeling in a Commercial Setting: A Case Study: This paper describes a case study conducted in collaboration with Nortel to demonstrate the feasibility of applying formal modeling techniques to telecommunication systems. A formal description language, SDL, was chosen by our qualitative CASE tool evaluation to model a multimedia-messaging system described by an 80-page natural language specification. Our model was used to identify errors in the software requirements document and to derive test suites, shadowing the existing development process and keeping track of a variety of productivity data.<|reference_end|> | arxiv | @article{wong1999formal,
title={Formal Modeling in a Commercial Setting: A Case Study},
author={A. Wong and M. Chechik},
journal={arXiv preprint arXiv:cs/9906032},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906032},
primaryClass={cs.SE}
} | wong1999formal |
arxiv-676362 | cs/9906033 | Robust Reductions | <|reference_start|>Robust Reductions: We continue the study of robust reductions initiated by Gavalda and Balcazar. In particular, a 1991 paper of Gavalda and Balcazar claimed an optimal separation between the power of robust and nondeterministic strong reductions. Unfortunately, their proof is invalid. We re-establish their theorem. Generalizing robust reductions, we note that robustly strong reductions are built from two restrictions, robust underproductivity and robust overproductivity, both of which have been separately studied before in other contexts. By systematically analyzing the power of these reductions, we explore the extent to which each restriction weakens the power of reductions. We show that one of these reductions yields a new, strong form of the Karp-Lipton Theorem.<|reference_end|> | arxiv | @article{cai1999robust,
title={Robust Reductions},
author={Jin-Yi Cai, Lane A. Hemaspaandra, Gerd Wechsung},
journal={arXiv preprint arXiv:cs/9906033},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906033},
primaryClass={cs.CC}
} | cai1999robust |
arxiv-676363 | cs/9906034 | A Unified Example-Based and Lexicalist Approach to Machine Translation | <|reference_start|>A Unified Example-Based and Lexicalist Approach to Machine Translation: We present an approach to Machine Translation that combines the ideas and methodologies of the Example-Based and Lexicalist theoretical frameworks. The approach has been implemented in a multilingual Machine Translation system.<|reference_end|> | arxiv | @article{turcato1999a,
title={A Unified Example-Based and Lexicalist Approach to Machine Translation},
author={Davide Turcato, Paul McFetridge, Fred Popowich, Janine Toole},
journal={arXiv preprint arXiv:cs/9906034},
year={1999},
archivePrefix={arXiv},
eprint={cs/9906034},
primaryClass={cs.CL}
} | turcato1999a |
arxiv-676364 | cs/9907001 | Setting Parameters by Example | <|reference_start|>Setting Parameters by Example: We introduce a class of "inverse parametric optimization" problems, in which one is given both a parametric optimization problem and a desired optimal solution; the task is to determine parameter values that lead to the given solution. We describe algorithms for solving such problems for minimum spanning trees, shortest paths, and other "optimal subgraph" problems, and discuss applications in multicast routing, vehicle path planning, resource allocation, and board game programming.<|reference_end|> | arxiv | @article{eppstein1999setting,
title={Setting Parameters by Example},
author={David Eppstein},
journal={SIAM J. Computing 32(3):643-653, 2003},
year={1999},
doi={10.1137/S0097539700370084},
archivePrefix={arXiv},
eprint={cs/9907001},
primaryClass={cs.DS cs.CG}
} | eppstein1999setting |
arxiv-676365 | cs/9907002 | The Distribution of Cycle Lengths in Graphical Models for Iterative Decoding | <|reference_start|>The Distribution of Cycle Lengths in Graphical Models for Iterative Decoding: This paper analyzes the distribution of cycle lengths in turbo decoding and low-density parity check (LDPC) graphs. The properties of such cycles are of significant interest in the context of iterative decoding algorithms which are based on belief propagation or message passing. We estimate the probability that there exist no simple cycles of length less than or equal to k at a randomly chosen node in a turbo decoding graph using a combination of counting arguments and independence assumptions. For large block lengths n, this probability is approximately e^{-{2^{k-1}-4}/n}, k>=4. Simulation results validate the accuracy of the various approximations. For example, for turbo codes with a block length of 64000, a randomly chosen node has a less than 1% chance of being on a cycle of length less than or equal to 10, but has a greater than 99.9% chance of being on a cycle of length less than or equal to 20. The effect of the "S-random" permutation is also analyzed and it is shown that while it eliminates short cycles of length k<8, it does not significantly affect the overall distribution of cycle lengths. Similar analyses and simulations are also presented for graphs for LDPC codes. The paper concludes by commenting briefly on how these results may provide insight into the practical success of iterative decoding methods.<|reference_end|> | arxiv | @article{ge1999the,
title={The Distribution of Cycle Lengths in Graphical Models for Iterative
Decoding},
author={Xian-ping Ge, David Eppstein, Padhraic Smyth},
journal={IEEE Trans. Information Theory 47(6):2549-2553, 2001},
year={1999},
doi={10.1109/18.945266},
archivePrefix={arXiv},
eprint={cs/9907002},
primaryClass={cs.DM}
} | ge1999the |
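The headline numbers in the record above can be checked directly from the stated approximation e^{-(2^{k-1}-4)/n}, which gives the probability that a randomly chosen node lies on no simple cycle of length at most k. The short computation below reproduces the block-length-64000 figures quoted in the abstract.

```python
import math

def prob_no_short_cycle(k, n):
    """Approximate probability that a node is on no cycle of length <= k."""
    return math.exp(-(2 ** (k - 1) - 4) / n)

n = 64000
for k in (10, 20):
    p_on_cycle = 1.0 - prob_no_short_cycle(k, n)
    print(f"k={k}: P(on a cycle of length <= {k}) ~ {p_on_cycle:.4f}")

# Prints roughly 0.0079 for k=10 (less than 1%) and 0.9997 for k=20
# (greater than 99.9%), matching the figures quoted in the abstract above.
```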
arxiv-676366 | cs/9907003 | Annotation graphs as a framework for multidimensional linguistic data analysis | <|reference_start|>Annotation graphs as a framework for multidimensional linguistic data analysis: In recent work we have presented a formal framework for linguistic annotation based on labeled acyclic digraphs. These `annotation graphs' offer a simple yet powerful method for representing complex annotation structures incorporating hierarchy and overlap. Here, we motivate and illustrate our approach using discourse-level annotations of text and speech data drawn from the CALLHOME, COCONUT, MUC-7, DAMSL and TRAINS annotation schemes. With the help of domain specialists, we have constructed a hybrid multi-level annotation for a fragment of the Boston University Radio Speech Corpus which includes the following levels: segment, word, breath, ToBI, Tilt, Treebank, coreference and named entity. We show how annotation graphs can represent hybrid multi-level structures which derive from a diverse set of file formats. We also show how the approach facilitates substantive comparison of multiple annotations of a single signal based on different theoretical models. The discussion shows how annotation graphs open the door to wide-ranging integration of tools, formats and corpora.<|reference_end|> | arxiv | @article{bird1999annotation,
title={Annotation graphs as a framework for multidimensional linguistic data
analysis},
author={Steven Bird and Mark Liberman},
journal={arXiv preprint arXiv:cs/9907003},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907003},
primaryClass={cs.CL}
} | bird1999annotation |
arxiv-676367 | cs/9907004 | MAP Lexicon is useful for segmentation and word discovery in child-directed speech | <|reference_start|>MAP Lexicon is useful for segmentation and word discovery in child-directed speech: Because of rather fundamental changes to the underlying model proposed in the paper, it has been withdrawn from the archive.<|reference_end|> | arxiv | @article{venkataraman1999map,
title={MAP Lexicon is useful for segmentation and word discovery in
child-directed speech},
author={Anand Venkataraman},
journal={arXiv preprint arXiv:cs/9907004},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907004},
primaryClass={cs.CL cs.LG}
} | venkataraman1999map |
arxiv-676368 | cs/9907005 | Alternative Local Discriminant Bases Using Empirical Expectation and Variance Estimation | <|reference_start|>Alternative Local Discriminant Bases Using Empirical Expectation and Variance Estimation: We propose alternative discriminant measures for selecting the best basis among a large collection of orthonormal bases for classification purposes. A generalization of the Local Discriminant Basis Algorithm of Saito and Coifman is constructed. The success of these new methods is evaluated and compared to earlier methods in experiments.<|reference_end|> | arxiv | @article{fossgaard1999alternative,
title={Alternative Local Discriminant Bases Using Empirical Expectation and
Variance Estimation},
author={Eirik Fossgaard},
journal={arXiv preprint arXiv:cs/9907005},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907005},
primaryClass={cs.NA}
} | fossgaard1999alternative |
arxiv-676369 | cs/9907006 | Representing Text Chunks | <|reference_start|>Representing Text Chunks: Dividing sentences into chunks of words is a useful preprocessing step for parsing, information extraction and information retrieval. (Ramshaw and Marcus, 1995) have introduced a "convenient" data representation for chunking by converting it to a tagging task. In this paper we will examine seven different data representations for the problem of recognizing noun phrase chunks. We will show that the data representation choice has a minor influence on chunking performance. However, equipped with the most suitable data representation, our memory-based learning chunker was able to improve the best published chunking results for a standard data set.<|reference_end|> | arxiv | @article{sang1999representing,
title={Representing Text Chunks},
author={Erik F. Tjong Kim Sang and Jorn Veenstra},
journal={EACL'99, Bergen},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907006},
primaryClass={cs.CL}
} | sang1999representing |
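The record above compares alternative tag-based encodings of noun-phrase chunks. As a small illustration of "converting chunking to a tagging task", here is one common encoding (the IOB2 scheme, with B- marking the first word of a chunk, I- a continuation, and O a word outside any chunk) applied to an illustrative sentence; the other representation variants compared in the paper are not reproduced here.

```python
# Illustrative sentence with its noun-phrase chunks bracketed:
#   [He] reckons [the current account deficit] will narrow .
tokens = ["He", "reckons", "the", "current", "account", "deficit",
          "will", "narrow", "."]
iob2_tags = ["B-NP", "O", "B-NP", "I-NP", "I-NP", "I-NP",
             "O", "O", "O"]

for token, tag in zip(tokens, iob2_tags):
    print(f"{token}\t{tag}")
```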
arxiv-676370 | cs/9907007 | Cross-Language Information Retrieval for Technical Documents | <|reference_start|>Cross-Language Information Retrieval for Technical Documents: This paper proposes a Japanese/English cross-language information retrieval (CLIR) system targeting technical documents. Our system first translates a given query containing technical terms into the target language, and then retrieves documents relevant to the translated query. The translation of technical terms is still problematic in that technical terms are often compound words, and thus new terms can be progressively created simply by combining existing base words. In addition, Japanese often represents loanwords based on its phonogram. Consequently, existing dictionaries find it difficult to achieve sufficient coverage. To counter the first problem, we use a compound word translation method, which uses a bilingual dictionary for base words and collocational statistics to resolve translation ambiguity. For the second problem, we propose a transliteration method, which identifies phonetic equivalents in the target language. We also show the effectiveness of our system using a test collection for CLIR.<|reference_end|> | arxiv | @article{fujii1999cross-language,
title={Cross-Language Information Retrieval for Technical Documents},
author={Atsushi Fujii and Tetsuya Ishikawa},
journal={Proceedings of the Joint ACL SIGDAT Conference on Empirical
Methods in Natural Language Processing and Very Large Corpora, pp.29-37, 1999},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907007},
primaryClass={cs.CL}
} | fujii1999cross-language |
arxiv-676371 | cs/9907008 | Explanation-based Learning for Machine Translation | <|reference_start|>Explanation-based Learning for Machine Translation: In this paper we present an application of explanation-based learning (EBL) in the parsing module of a real-time English-Spanish machine translation system designed to translate closed captions. We discuss the efficiency/coverage trade-offs available in EBL and introduce the techniques we use to increase coverage while maintaining a high level of space and time efficiency. Our performance results indicate that this approach is effective.<|reference_end|> | arxiv | @article{toole1999explanation-based,
title={Explanation-based Learning for Machine Translation},
author={Janine Toole, Fred Popowich, Devlan Nicholson, Davide Turcato, Paul
McFetridge},
journal={arXiv preprint arXiv:cs/9907008},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907008},
primaryClass={cs.CL}
} | toole1999explanation-based |
arxiv-676372 | cs/9907009 | Designing and Mining Multi-Terabyte Astronomy Archives: The Sloan Digital Sky Survey | <|reference_start|>Designing and Mining Multi-Terabyte Astronomy Archives: The Sloan Digital Sky Survey: The next-generation astronomy digital archives will cover most of the universe at fine resolution in many wave-lengths, from X-rays to ultraviolet, optical, and infrared. The archives will be stored at diverse geographical locations. One of the first of these projects, the Sloan Digital Sky Survey (SDSS) will create a 5-wavelength catalog over 10,000 square degrees of the sky (see http://www.sdss.org/). The 200 million objects in the multi-terabyte database will have mostly numerical attributes, defining a space of 100+ dimensions. Points in this space have highly correlated distributions. The archive will enable astronomers to explore the data interactively. Data access will be aided by a multidimensional spatial index and other indices. The data will be partitioned in many ways. Small tag objects consisting of the most popular attributes speed up frequent searches. Splitting the data among multiple servers enables parallel, scalable I/O and applies parallel processing to the data. Hashing techniques allow efficient clustering and pair-wise comparison algorithms that parallelize nicely. Randomly sampled subsets allow debugging otherwise large queries at the desktop. Central servers will operate a data pump that supports sweeping searches that touch most of the data. The anticipated queries require special operators related to angular distances and complex similarity tests of object properties, like shapes, colors, velocity vectors, or temporal behaviors. These issues pose interesting data management challenges.<|reference_end|> | arxiv | @article{szalay1999designing,
title={Designing and Mining Multi-Terabyte Astronomy Archives: The Sloan
Digital Sky Survey},
author={Alexander S. Szalay, Peter Kunszt, Ani Thakar, Jim Gray},
journal={arXiv preprint arXiv:cs/9907009},
year={1999},
number={MS_TR_99_30},
archivePrefix={arXiv},
eprint={cs/9907009},
primaryClass={cs.DB cs.DL}
} | szalay1999designing |
arxiv-676373 | cs/9907010 | Language Identification With Confidence Limits | <|reference_start|>Language Identification With Confidence Limits: A statistical classification algorithm and its application to language identification from noisy input are described. The main innovation is to compute confidence limits on the classification, so that the algorithm terminates when enough evidence to make a clear decision has been made, and so avoiding problems with categories that have similar characteristics. A second application, to genre identification, is briefly examined. The results show that some of the problems of other language identification techniques can be avoided, and illustrate a more important point: that a statistical language process can be used to provide feedback about its own success rate.<|reference_end|> | arxiv | @article{elworthy1999language,
title={Language Identification With Confidence Limits},
author={David Elworthy},
journal={arXiv preprint arXiv:cs/9907010},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907010},
primaryClass={cs.CL}
} | elworthy1999language |
arxiv-676374 | cs/9907011 | Reducing Randomness via Irrational Numbers | <|reference_start|>Reducing Randomness via Irrational Numbers: We propose a general methodology for testing whether a given polynomial with integer coefficients is identically zero. The methodology evaluates the polynomial at efficiently computable approximations of suitable irrational points. In contrast to the classical technique of DeMillo, Lipton, Schwartz, and Zippel, this methodology can decrease the error probability by increasing the precision of the approximations instead of using more random bits. Consequently, randomized algorithms that use the classical technique can generally be improved using the new methodology. To demonstrate the methodology, we discuss two nontrivial applications. The first is to decide whether a graph has a perfect matching in parallel. Our new NC algorithm uses fewer random bits while doing less work than the previously best NC algorithm by Chari, Rohatgi, and Srinivasan. The second application is to test the equality of two multisets of integers. Our new algorithm improves upon the previously best algorithms by Blum and Kannan and can speed up their checking algorithm for sorting programs on a large range of inputs.<|reference_end|> | arxiv | @article{chen1999reducing,
title={Reducing Randomness via Irrational Numbers},
author={Zhi-Zhong Chen and Ming-Yang Kao},
journal={SIAM Journal on Computing, 29(4):1247--1256, 2000},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907011},
primaryClass={cs.DS cs.DM}
} | chen1999reducing |
arxiv-676375 | cs/9907012 | Selective Magic HPSG Parsing | <|reference_start|>Selective Magic HPSG Parsing: We propose a parser for constraint-logic grammars implementing HPSG that combines the advantages of dynamic bottom-up and advanced top-down control. The parser allows the user to apply magic compilation to specific constraints in a grammar which as a result can be processed dynamically in a bottom-up and goal-directed fashion. State of the art top-down processing techniques are used to deal with the remaining constraints. We discuss various aspects concerning the implementation of the parser as part of a grammar development system.<|reference_end|> | arxiv | @article{minnen1999selective,
title={Selective Magic HPSG Parsing},
author={Guido Minnen (University of Sussex)},
journal={Proceedings of EACL99, Bergen, Norway, June 8-11},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907012},
primaryClass={cs.CL}
} | minnen1999selective |
arxiv-676376 | cs/9907013 | Corpus Annotation for Parser Evaluation | <|reference_start|>Corpus Annotation for Parser Evaluation: We describe a recently developed corpus annotation scheme for evaluating parsers that avoids shortcomings of current methods. The scheme encodes grammatical relations between heads and dependents, and has been used to mark up a new public-domain corpus of naturally occurring English text. We show how the corpus can be used to evaluate the accuracy of a robust parser, and relate the corpus to extant resources.<|reference_end|> | arxiv | @article{carroll1999corpus,
title={Corpus Annotation for Parser Evaluation},
author={John Carroll, Guido Minnen (University of Sussex), Ted Briscoe
(Cambridge University)},
journal={Proceedings of the EACL99 workshop on Linguistically Interpreted
Corpora (LINC), Bergen, Norway, June 12},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907013},
primaryClass={cs.CL}
} | carroll1999corpus |
arxiv-676377 | cs/9907014 | No information can be conveyed by certain events: The case of the clever widows of Fornicalia and the Stobon Oracle | <|reference_start|>No information can be conveyed by certain events: The case of the clever widows of Fornicalia and the Stobon Oracle: In this short article, we look at an old logical puzzle, its solution and proof and discuss some interesting aspects concerning its representation in a logic programming language like Prolog. We also discuss an intriguing information theoretic aspect of the puzzle.<|reference_end|> | arxiv | @article{venkataraman1999no,
title={No information can be conveyed by certain events: The case of the clever
widows of Fornicalia and the Stobon Oracle},
author={Anand Venkataraman and Ray Kemp},
journal={arXiv preprint arXiv:cs/9907014},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907014},
primaryClass={cs.LO cs.GL}
} | venkataraman1999no |
arxiv-676378 | cs/9907015 | Linear-Time Approximation Algorithms for Computing Numerical Summation with Provably Small Errors | <|reference_start|>Linear-Time Approximation Algorithms for Computing Numerical Summation with Provably Small Errors: Given a multiset $X=\{x_1,..., x_n\}$ of real numbers, the floating-point set summation problem asks for $S_n=x_1+...+x_n$. Let $E^*_n$ denote the minimum worst-case error over all possible orderings of evaluating $S_n$. We prove that if $X$ has both positive and negative numbers, it is NP-hard to compute $S_n$ with the worst-case error equal to $E^*_n$. We then give the first known polynomial-time approximation algorithm that has a provably small error for arbitrary $X$. Our algorithm incurs a worst-case error at most $2(\mix)E^*_n$. (All logarithms in this paper are base 2.) After $X$ is sorted, it runs in O(n) time. For the case where $X$ is either all positive or all negative, we give another approximation algorithm with a worst-case error at most $\lceil\log\log n\rceil E^*_n$. Even for unsorted $X$, this algorithm runs in O(n) time. Previously, the best linear-time approximation algorithm had a worst-case error at most $\lceil\log n\rceil E^*_n$, while $E^*_n$ was known to be attainable in $O(n \log n)$ time using Huffman coding.<|reference_end|> | arxiv | @article{kao1999linear-time,
title={Linear-Time Approximation Algorithms for Computing Numerical Summation
with Provably Small Errors},
author={Ming-Yang Kao and Jie Wang},
journal={SIAM Journal on Computing, 29(5):1568--1576, 2000},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907015},
primaryClass={cs.DS cs.NA math.NA}
} | kao1999linear-time |
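The record above mentions that the optimal worst-case ordering for all-positive inputs can be found in O(n log n) time via Huffman coding. A minimal sketch of that known baseline, pairing the two smallest partial sums first with a heap, is shown below; it is not the linear-time approximation algorithm of the paper.

```python
import heapq

def huffman_style_sum(values):
    """Sum non-negative floats by repeatedly adding the two smallest
    partial sums (Huffman-style pairing), which keeps intermediate
    results small and hence controls worst-case rounding error."""
    heap = list(values)
    heapq.heapify(heap)
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, a + b)
    return heap[0] if heap else 0.0

print(huffman_style_sum([1e16, 1.0, 1.0, 1.0, 1.0]))   # 1.0000000000000004e+16
print(sum([1e16, 1.0, 1.0, 1.0, 1.0]))                 # 1e+16 (the ones are lost)
```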
arxiv-676379 | cs/9907016 | Microsoft TerraServer: A Spatial Data Warehouse | <|reference_start|>Microsoft TerraServer: A Spatial Data Warehouse: The TerraServer stores aerial, satellite, and topographic images of the earth in a SQL database available via the Internet. It is the world's largest online atlas, combining five terabytes of image data from the United States Geological Survey (USGS) and SPIN-2. This report describes the system redesign based on our experience over the last year. It also reports usage and operations results over the last year -- over 2 billion web hits and over 20 terabytes of imagery served over the Internet. Internet browsers provide intuitive spatial and text interfaces to the data. Users need no special hardware, software, or knowledge to locate and browse imagery. This paper describes how terabytes of "Internet unfriendly" geo-spatial images were scrubbed and edited into hundreds of millions of "Internet friendly" image tiles and loaded into a SQL data warehouse. Microsoft TerraServer demonstrates that general-purpose relational database technology can manage large scale image repositories, and shows that web browsers can be a good geospatial image presentation system.<|reference_end|> | arxiv | @article{slutz1999microsoft,
title={Microsoft TerraServer: A Spatial Data Warehouse},
author={Tom Barclay Jim Gray Don Slutz},
journal={arXiv preprint arXiv:cs/9907016},
year={1999},
number={Microsoft Research Technical Report MSR-TR-99-29},
archivePrefix={arXiv},
eprint={cs/9907016},
primaryClass={cs.DB cs.DL}
} | slutz1999microsoft |
arxiv-676380 | cs/9907017 | A Bootstrap Approach to Automatically Generating Lexical Transfer Rules | <|reference_start|>A Bootstrap Approach to Automatically Generating Lexical Transfer Rules: We describe a method for automatically generating Lexical Transfer Rules (LTRs) from word equivalences using transfer rule templates. Templates are skeletal LTRs, unspecified for words. New LTRs are created by instantiating a template with words, provided that the words belong to the appropriate lexical categories required by the template. We define two methods for creating an inventory of templates and using them to generate new LTRs. A simpler method consists of extracting a finite set of templates from a sample of hand coded LTRs and directly using them in the generation process. A further method consists of abstracting over the initial finite set of templates to define higher level templates, where bilingual equivalences are defined in terms of correspondences involving phrasal categories. Phrasal templates are then mapped onto sets of lexical templates with the aid of grammars. In this way an infinite set of lexical templates is recursively defined. New LTRs are created by parsing input words, matching a template at the phrasal level and using the corresponding lexical categories to instantiate the lexical template. The definition of an infinite set of templates enables the automatic creation of LTRs for multi-word, non-compositional word equivalences of any cardinality.<|reference_end|> | arxiv | @article{turcato1999a,
title={A Bootstrap Approach to Automatically Generating Lexical Transfer Rules},
author={Davide Turcato, Paul McFetridge, Fred Popowich and Janine Toole},
journal={arXiv preprint arXiv:cs/9907017},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907017},
primaryClass={cs.CL}
} | turcato1999a |
arxiv-676381 | cs/9907018 | Hinged Dissection of Polyominoes and Polyforms | <|reference_start|>Hinged Dissection of Polyominoes and Polyforms: A hinged dissection of a set of polygons S is a collection of polygonal pieces hinged together at vertices that can be folded into any member of S. We present a hinged dissection of all edge-to-edge gluings of n congruent copies of a polygon P that join corresponding edges of P. This construction uses kn pieces, where k is the number of vertices of P. When P is a regular polygon, we show how to reduce the number of pieces to ceiling(k/2)*(n-1). In particular, we consider polyominoes (made up of unit squares), polyiamonds (made up of equilateral triangles), and polyhexes (made up of regular hexagons). We also give a hinged dissection of all polyabolos (made up of right isosceles triangles), which do not fall under the general result mentioned above. Finally, we show that if P can be hinged into Q, then any edge-to-edge gluing of n congruent copies of P can be hinged into any edge-to-edge gluing of n congruent copies of Q.<|reference_end|> | arxiv | @article{demaine1999hinged,
title={Hinged Dissection of Polyominoes and Polyforms},
author={Erik D. Demaine, Martin L. Demaine, David Eppstein, Greg N.
Frederickson, Erich Friedman},
journal={arXiv preprint arXiv:cs/9907018},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907018},
primaryClass={cs.CG cs.DM}
} | demaine1999hinged |
arxiv-676382 | cs/9907019 | A Reasonable C++ Wrappered Java Native Interface | <|reference_start|>A Reasonable C++ Wrappered Java Native Interface: A reasonable C++ Java Native Interface (JNI) technique termed C++ Wrappered JNI (C++WJ) is presented. The technique simplifies current error-prone JNI development by wrappering JNI calls. Provided development is done with the aid of a C++ compiler, C++WJ offers type checking and behind the scenes caching. A tool (jH) patterned on javah automates the creation of C++WJ classes. The paper presents the rationale behind the choices that led to C++WJ. Handling of Java class and interface hierarchy including Java type downcasts is discussed. Efficiency considerations in the C++WJ lead to two flavors of C++ classes: jtypes and Jtypes. A jtype is a lightweight less than full wrapper of a JNI object reference. A Jtype is a heavyweight full wrapper of a JNI object reference.<|reference_end|> | arxiv | @article{bordelon1999a,
title={A Reasonable C++ Wrappered Java Native Interface},
author={Craig Bordelon},
journal={arXiv preprint arXiv:cs/9907019},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907019},
primaryClass={cs.SE}
} | bordelon1999a |
arxiv-676383 | cs/9907020 | Generalized linearization in nonlinear modeling of data | <|reference_start|>Generalized linearization in nonlinear modeling of data: The principal innovative idea in this paper is to transform the original complex nonlinear modeling problem into a combination of linear problem and very simple nonlinear problems. The key step is the generalized linearization of nonlinear terms. This paper only presents the introductory strategy of this methodology. The practical numerical experiments will be provided subsequently.<|reference_end|> | arxiv | @article{chen1999generalized,
title={Generalized linearization in nonlinear modeling of data},
author={W. Chen},
journal={arXiv preprint arXiv:cs/9907020},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907020},
primaryClass={cs.CE cs.NA math.NA}
} | chen1999generalized |
arxiv-676384 | cs/9907021 | Architectural Considerations for Conversational Systems -- The Verbmobil/INTARC Experience | <|reference_start|>Architectural Considerations for Conversational Systems -- The Verbmobil/INTARC Experience: The paper describes the speech to speech translation system INTARC, developed during the first phase of the Verbmobil project. The general design goals of the INTARC system architecture were time synchronous processing as well as incrementality and interactivity as a means to achieve a higher degree of robustness and scalability. Interactivity means that, in addition to the bottom-up (in terms of processing levels) data flow, the system can process top-down restrictions concerning the same signal segment at all processing levels. The construction of INTARC 2.0, which has been operational since fall 1996, followed an engineering approach focussing on the integration of symbolic (linguistic) and stochastic (recognition) techniques which led to a generalization of the concept of a ``one pass'' beam search.<|reference_end|> | arxiv | @article{goerz1999architectural,
title={Architectural Considerations for Conversational Systems -- The
Verbmobil/INTARC Experience},
author={Guenther Goerz, Joerg Spilker, Volker Strom, Hans Weber},
journal={arXiv preprint arXiv:cs/9907021},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907021},
primaryClass={cs.CL}
} | goerz1999architectural |
arxiv-676385 | cs/9907022 | Weak length induction and slow growing depth boolean circuits | <|reference_start|>Weak length induction and slow growing depth boolean circuits: We define a hierarchy of circuit complexity classes LD^i, whose depth is the inverse of a function in the Ackermann hierarchy. Then we introduce extremely weak versions of length induction and construct a bounded arithmetic theory L^i_2 whose provably total functions exactly correspond to functions computable by LD^i circuits. Finally, we prove a non-conservation result between L^i_2 and a weaker theory AC^0CA which corresponds to the class AC^0. Our proof utilizes the KPT witnessing theorem.<|reference_end|> | arxiv | @article{kuroda1999weak,
title={Weak length induction and slow growing depth boolean circuits},
author={Satoru Kuroda},
journal={arXiv preprint arXiv:cs/9907022},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907022},
primaryClass={cs.LO}
} | kuroda1999weak |
arxiv-676386 | cs/9907023 | On Deletion in Delaunay Triangulation | <|reference_start|>On Deletion in Delaunay Triangulation: This paper presents how the space of spheres and shelling may be used to delete a point from a $d$-dimensional triangulation efficiently. In dimension two, if k is the degree of the deleted vertex, the complexity is O(k log k), but we notice that this number only applies to low cost operations, while time consuming computations are only done a linear number of times. This algorithm may be viewed as a variation of Heller's algorithm, which is popular in the geographic information system community. Unfortunately, Heller algorithm is false, as explained in this paper.<|reference_end|> | arxiv | @article{devillers1999on,
title={On Deletion in Delaunay Triangulation},
author={Olivier Devillers},
journal={arXiv preprint arXiv:cs/9907023},
year={1999},
number={INRIA Research report 3451},
archivePrefix={arXiv},
eprint={cs/9907023},
primaryClass={cs.CG}
} | devillers1999on |
arxiv-676387 | cs/9907024 | Improved Incremental Randomized Delaunay Triangulation | <|reference_start|>Improved Incremental Randomized Delaunay Triangulation: We propose a new data structure to compute the Delaunay triangulation of a set of points in the plane. It combines good worst case complexity, fast behavior on real data, and small memory occupation. The location structure is organized into several levels. The lowest level just consists of the triangulation, then each level contains the triangulation of a small sample of the levels below. Point location is done by marching in a triangulation to determine the nearest neighbor of the query at that level, then the march restarts from that neighbor at the level below. Using a small sample (3%) allows a small memory occupation; the march and the use of the nearest neighbor to change levels quickly locate the query.<|reference_end|> | arxiv | @article{devillers1999improved,
title={Improved Incremental Randomized Delaunay Triangulation},
author={Olivier Devillers},
journal={arXiv preprint arXiv:cs/9907024},
year={1999},
number={INRIA Research Report 3298},
archivePrefix={arXiv},
eprint={cs/9907024},
primaryClass={cs.CG}
} | devillers1999improved |
arxiv-676388 | cs/9907025 | The union of unit balls has quadratic complexity, even if they all contain the origin | <|reference_start|>The union of unit balls has quadratic complexity, even if they all contain the origin: We provide a lower bound construction showing that the union of unit balls in three-dimensional space has quadratic complexity, even if they all contain the origin. This settles a conjecture of Sharir.<|reference_end|> | arxiv | @article{bronnimann1999the,
title={The union of unit balls has quadratic complexity, even if they all
contain the origin},
author={Herve Bronnimann, Olivier Devillers},
journal={arXiv preprint arXiv:cs/9907025},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907025},
primaryClass={cs.CG}
} | bronnimann1999the |
arxiv-676389 | cs/9907026 | Mixing representation levels: The hybrid approach to automatic text generation | <|reference_start|>Mixing representation levels: The hybrid approach to automatic text generation: Natural language generation systems (NLG) map non-linguistic representations into strings of words through a number of steps using intermediate representations of various levels of abstraction. Template based systems, by contrast, tend to use only one representation level, i.e. fixed strings, which are combined, possibly in a sophisticated way, to generate the final text. In some circumstances, it may be profitable to combine NLG and template based techniques. The issue of combining generation techniques can be seen in more abstract terms as the issue of mixing levels of representation of different degrees of linguistic abstraction. This paper aims at defining a reference architecture for systems using mixed representations. We argue that mixed representations can be used without abandoning a linguistically grounded approach to language generation.<|reference_end|> | arxiv | @article{pianta1999mixing,
title={Mixing representation levels: The hybrid approach to automatic text
generation},
author={Emanuele Pianta, Lucia M. Tovena},
journal={Proceedings of the AISB'99 Workshop on ``Reference Architectures
and Data Standards for NLP'', Edinburgh Scotland, April 1999, 8-13},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907026},
primaryClass={cs.CL cs.AI}
} | pianta1999mixing |
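
A toy illustration of the mixed-representation idea (not the architecture proposed in the paper): fixed template strings are combined with a small rule-based realizer for the one slot that needs linguistic knowledge, here number agreement. All names in the sketch are invented for the example.

```python
def realize_np(noun, count):
    # A small "deep generation" fragment: number agreement is computed by rule
    # rather than being frozen inside the template string.
    if count == 0:
        return f"no {noun}s"
    if count == 1:
        return f"one {noun}"
    return f"{count} {noun}s"

def weather_report(city, temp_c, n_showers):
    # Mixed representation: canned strings carry the invariant parts of the
    # message, while the noun phrase is produced by the rule-based realizer.
    return (f"In {city}, the temperature is {temp_c} degrees Celsius, "
            f"with {realize_np('shower', n_showers)} expected.")

print(weather_report("Trento", 21, 2))
# -> In Trento, the temperature is 21 degrees Celsius, with 2 showers expected.
```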
arxiv-676390 | cs/9907027 | The Alma Project, or How First-Order Logic Can Help Us in Imperative Programming | <|reference_start|>The Alma Project, or How First-Order Logic Can Help Us in Imperative Programming: The aim of the Alma project is the design of a strongly typed constraint programming language that combines the advantages of logic and imperative programming. The first stage of the project was the design and implementation of Alma-0, a small programming language that provides a support for declarative programming within the imperative programming framework. It is obtained by extending a subset of Modula-2 by a small number of features inspired by the logic programming paradigm. In this paper we discuss the rationale for the design of Alma-0, the benefits of the resulting hybrid programming framework, and the current work on adding constraint processing capabilities to the language. In particular, we discuss the role of the logical and customary variables, the interaction between the constraint store and the program, and the need for lists.<|reference_end|> | arxiv | @article{apt1999the,
title={The Alma Project, or How First-Order Logic Can Help Us in Imperative
Programming},
author={Krzysztof R. Apt and Andrea Schaerf},
journal={arXiv preprint arXiv:cs/9907027},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907027},
primaryClass={cs.LO cs.PL}
} | apt1999the |
arxiv-676391 | cs/9907028 | Further Results on Arithmetic Filters for Geometric Predicates | <|reference_start|>Further Results on Arithmetic Filters for Geometric Predicates: An efficient technique to solve precision problems consists in using exact computations. For geometric predicates, systematically resorting to expensive exact computations can be avoided by the use of filters. The predicate is first evaluated using rounded computations, and an error estimation gives a certificate of the validity of the result. In this note, we study the statistical efficiency of filters for the cosphericity predicate under an assumption of regular distribution of the points. We prove that the absolute value of the polynomial corresponding to the insphere test is smaller than epsilon with probability O(epsilon log 1/epsilon), improving the results of a previous paper by the same authors.<|reference_end|> | arxiv | @article{devillers1999further,
title={Further Results on Arithmetic Filters for Geometric Predicates},
author={Olivier Devillers and Franco P. Preparata},
journal={Comput. Geom. Theory Appl. 1999 13:141-148},
year={1999},
number={INRIA Research report 3528},
archivePrefix={arXiv},
eprint={cs/9907028},
primaryClass={cs.CG}
} | devillers1999further |
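
The filtering scheme discussed above can be illustrated on the simpler 2D orientation predicate (rather than the cosphericity predicate analysed in the paper): evaluate in floating point, accept the sign if it beats a deliberately loose error bound, and otherwise fall back to exact rational arithmetic. The constant 16 in the bound is a conservative choice made for this sketch, not the tight constant.

```python
import sys
from fractions import Fraction

EPS = sys.float_info.epsilon  # 2^-52 for IEEE double precision

def orient2d_filtered(a, b, c):
    # Sign of (b - a) x (c - a): fast floating-point evaluation with a forward
    # error bound; the exact rational computation runs only when the rounded
    # value is too small to be trusted (the "filter failure" case).
    detleft = (b[0] - a[0]) * (c[1] - a[1])
    detright = (b[1] - a[1]) * (c[0] - a[0])
    det = detleft - detright
    errbound = 16.0 * EPS * (abs(detleft) + abs(detright))  # deliberately loose
    if det > errbound:
        return 1
    if det < -errbound:
        return -1
    # Filter failed: certify the sign with exact arithmetic.
    ax, ay = Fraction(a[0]), Fraction(a[1])
    bx, by = Fraction(b[0]), Fraction(b[1])
    cx, cy = Fraction(c[0]), Fraction(c[1])
    exact = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
    return (exact > 0) - (exact < 0)
```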
arxiv-676392 | cs/9907029 | A Probabilistic Analysis of the Power of Arithmetic Filters | <|reference_start|>A Probabilistic Analysis of the Power of Arithmetic Filters: The assumption of real-number arithmetic, which is at the basis of conventional geometric algorithms, has been seriously challenged in recent years, since digital computers do not exhibit such capability. A geometric predicate usually consists of evaluating the sign of some algebraic expression. In most cases, rounded computations yield a reliable result, but sometimes rounded arithmetic introduces errors which may invalidate the algorithms. The rounded arithmetic may produce an incorrect result only if the exact absolute value of the algebraic expression is smaller than some (small) varepsilon, which represents the largest error that may arise in the evaluation of the expression. The threshold varepsilon depends on the structure of the expression and on the adopted computer arithmetic, assuming that the input operands are error-free. A pair (arithmetic engine,threshold) is an "arithmetic filter". In this paper we develop a general technique for assessing the efficacy of an arithmetic filter. The analysis consists of evaluating both the threshold and the probability of failure of the filter. To exemplify the approach, under the assumption that the input points be chosen randomly in a unit ball or unit cube with uniform density, we analyze the two important predicates "which-side" and "insphere". We show that the probability that the absolute values of the corresponding determinants be no larger than some positive value V, with emphasis on small V, is Theta(V) for the which-side predicate, while for the insphere predicate it is Theta(V^(2/3)) in dimension 1, O(sqrt(V)) in dimension 2, and O(sqrt(V) ln(1/V)) in higher dimensions. Constants are small, and are given in the paper.<|reference_end|> | arxiv | @article{devillers1999a,
title={A Probabilistic Analysis of the Power of Arithmetic Filters},
author={Olivier Devillers and Franco P. Preparata},
journal={Discrete and Computational Geometry, 20:523--547, 1998},
year={1999},
number={INRIA Research report 2971},
archivePrefix={arXiv},
eprint={cs/9907029},
primaryClass={cs.CG}
} | devillers1999a |
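
The stated asymptotics can be sanity-checked empirically. The sketch below estimates, by Monte Carlo simulation, the probability that the which-side determinant of three random points is at most V; the choice of the unit square (rather than a ball or cube) and the sample size are assumptions of the sketch. The estimates should shrink roughly linearly in V, in line with the Theta(V) behaviour stated above.

```python
import random

def estimate_failure_probability(V, trials=200_000, seed=42):
    # Monte Carlo estimate of P(|orientation determinant| <= V) for three points
    # drawn uniformly at random from the unit square.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        ax, ay = rng.random(), rng.random()
        bx, by = rng.random(), rng.random()
        cx, cy = rng.random(), rng.random()
        det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if abs(det) <= V:
            hits += 1
    return hits / trials

for V in (0.1, 0.01, 0.001):
    print(V, estimate_failure_probability(V))
# Each estimate should be roughly ten times smaller than the previous one.
```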
arxiv-676393 | cs/9907030 | Algorithms for Coloring Quadtrees | <|reference_start|>Algorithms for Coloring Quadtrees: We describe simple linear time algorithms for coloring the squares of balanced and unbalanced quadtrees so that no two adjacent squares are given the same color. If squares sharing sides are defined as adjacent, we color balanced quadtrees with three colors, and unbalanced quadtrees with four colors; these results are both tight, as some quadtrees require this many colors. If squares sharing corners are defined as adjacent, we color balanced or unbalanced quadtrees with six colors; for some quadtrees, at least five colors are required.<|reference_end|> | arxiv | @article{eppstein1999algorithms,
title={Algorithms for Coloring Quadtrees},
author={David Eppstein and Marshall W. Bern and Brad Hutchings},
journal={Algorithmica 32(1):87-94, 2002},
year={1999},
doi={10.1007/s00453-001-0054-2},
archivePrefix={arXiv},
eprint={cs/9907030},
primaryClass={cs.CG}
} | eppstein1999algorithms |
arxiv-676394 | cs/9907031 | Beta-Skeletons have Unbounded Dilation | <|reference_start|>Beta-Skeletons have Unbounded Dilation: A fractal construction shows that, for any beta>0, the beta-skeleton of a point set can have arbitrarily large dilation. In particular this applies to the Gabriel graph.<|reference_end|> | arxiv | @article{eppstein1999beta-skeletons,
title={Beta-Skeletons have Unbounded Dilation},
author={David Eppstein},
journal={Computational Geometry Theory & Appl. 23:43-52, 2002},
year={1999},
doi={10.1016/S0925-7721(01)00055-4},
archivePrefix={arXiv},
eprint={cs/9907031},
primaryClass={cs.CG math.MG}
} | eppstein1999beta-skeletons |
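
For concreteness, the Gabriel graph mentioned above keeps an edge pq exactly when no other point lies in the disk with diameter pq, and its dilation is the largest ratio of graph distance to Euclidean distance over all pairs of points. The naive O(n^3)-style sketch below computes both; it is illustrative only and unrelated to the fractal lower-bound construction of the paper.

```python
import heapq
import math

def gabriel_graph(points):
    # Keep edge (i, j) iff no other point lies inside the disk with diameter ij.
    n = len(points)
    edges = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            mx = (points[i][0] + points[j][0]) / 2.0
            my = (points[i][1] + points[j][1]) / 2.0
            r2 = (points[i][0] - mx) ** 2 + (points[i][1] - my) ** 2
            if all((points[k][0] - mx) ** 2 + (points[k][1] - my) ** 2 > r2
                   for k in range(n) if k not in (i, j)):
                w = math.dist(points[i], points[j])
                edges[i].append((j, w))
                edges[j].append((i, w))
    return edges

def dilation(points, edges):
    # Maximum over all pairs of (shortest-path distance / Euclidean distance),
    # using one Dijkstra run per source vertex.
    n = len(points)
    worst = 0.0
    for s in range(n):
        dist = [math.inf] * n
        dist[s] = 0.0
        heap = [(0.0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist[u]:
                continue
            for v, w in edges[u]:
                if d + w < dist[v]:
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        for t in range(n):
            if t != s:
                worst = max(worst, dist[t] / math.dist(points[s], points[t]))
    return worst
```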
arxiv-676395 | cs/9907032 | Clausal Temporal Resolution | <|reference_start|>Clausal Temporal Resolution: In this article, we examine how clausal resolution can be applied to a specific, but widely used, non-classical logic, namely discrete linear temporal logic. Thus, we first define a normal form for temporal formulae and show how arbitrary temporal formulae can be translated into the normal form, while preserving satisfiability. We then introduce novel resolution rules that can be applied to formulae in this normal form, provide a range of examples, and examine the correctness and complexity of this clausal resolution approach. Finally, we describe related work and future developments concerning this work.<|reference_end|> | arxiv | @article{fisher1999clausal,
title={Clausal Temporal Resolution},
author={Michael Fisher (1), Clare Dixon (1), Martin Peim (2) ((1) Department
of Computing and Mathematics, Manchester Metropolitan University, Manchester,
UK, (2) Department of Computer Science, Victoria University of Manchester,
Manchester, UK)},
journal={arXiv preprint arXiv:cs/9907032},
year={1999},
archivePrefix={arXiv},
eprint={cs/9907032},
primaryClass={cs.LO cs.AI}
} | fisher1999clausal |
arxiv-676396 | cs/9907033 | Unambiguous Computation: Boolean Hierarchies and Sparse Turing-Complete Sets | <|reference_start|>Unambiguous Computation: Boolean Hierarchies and Sparse Turing-Complete Sets: It is known that for any class C closed under union and intersection, the Boolean closure of C, the Boolean hierarchy over C, and the symmetric difference hierarchy over C all are equal. We prove that these equalities hold for any complexity class closed under intersection; in particular, they thus hold for unambiguous polynomial time (UP). In contrast to the NP case, we prove that the Hausdorff hierarchy and the nested difference hierarchy over UP both fail to capture the Boolean closure of UP in some relativized worlds. Karp and Lipton proved that if nondeterministic polynomial time has sparse Turing-complete sets, then the polynomial hierarchy collapses. We establish the first consequences from the assumption that unambiguous polynomial time has sparse Turing-complete sets: (a) UP is in Low_2, where Low_2 is the second level of the low hierarchy, and (b) each level of the unambiguous polynomial hierarchy is contained one level lower in the promise unambiguous polynomial hierarchy than is otherwise known to be the case.<|reference_end|> | arxiv | @article{hemaspaandra1999unambiguous,
title={Unambiguous Computation: Boolean Hierarchies and Sparse Turing-Complete
Sets},
author={Lane A. Hemaspaandra and Joerg Rothe},
journal={SIAM Journal on Computing vol. 26, no. 3, pp. 634--653, 1997},
year={1999},
number={earlier version appeared as University of Rochester TR-94-483},
archivePrefix={arXiv},
eprint={cs/9907033},
primaryClass={cs.CC}
} | hemaspaandra1999unambiguous |
arxiv-676397 | cs/9907034 | Polynomial-Time Multi-Selectivity | <|reference_start|>Polynomial-Time Multi-Selectivity: We introduce a generalization of Selman's P-selectivity that yields a more flexible notion of selectivity, called (polynomial-time) multi-selectivity, in which the selector is allowed to operate on multiple input strings. Since our introduction of this class, it has been used to prove the first known (and optimal) lower bounds for generalized selectivity-like classes in terms of EL_2, the second level of the extended low hierarchy. We study the resulting selectivity hierarchy, denoted by SH, which we prove does not collapse. In particular, we study the internal structure and the properties of SH and completely establish, in terms of incomparability and strict inclusion, the relations between our generalized selectivity classes and Ogihara's P-mc (polynomial-time membership-comparable) classes. Although SH is a strictly increasing infinite hierarchy, we show that the core results that hold for the P-selective sets and that prove them structurally simple also hold for SH. In particular, all sets in SH have small circuits; the NP sets in SH are in Low_2, the second level of the low hierarchy within NP; and SAT cannot be in SH unless P = NP. Finally, it is known that P-Sel, the class of P-selective sets, is not closed under union or intersection. We provide an extended selectivity hierarchy that is based on SH and that is large enough to capture those closures of the P-selective sets, and yet, in contrast with the P-mc classes, is refined enough to distinguish them.<|reference_end|> | arxiv | @article{hemaspaandra1999polynomial-time,
title={Polynomial-Time Multi-Selectivity},
author={Lane A. Hemaspaandra and Zhigen Jiang and Joerg Rothe and Osamu Watanabe},
journal={Journal of Universal Computer Science vol. 3, no. 3, pp. 197--229,
1997},
year={1999},
number={earlier version appeared as FSU Jena TR Math/Inf/96/11},
archivePrefix={arXiv},
eprint={cs/9907034},
primaryClass={cs.CC}
} | hemaspaandra1999polynomial-time |
arxiv-676398 | cs/9907035 | Easy Sets and Hard Certificate Schemes | <|reference_start|>Easy Sets and Hard Certificate Schemes: Can easy sets only have easy certificate schemes? In this paper, we study the class of sets that, for all NP certificate schemes (i.e., NP machines), always have easy acceptance certificates (i.e., accepting paths) that can be computed in polynomial time. We also study the class of sets that, for all NP certificate schemes, infinitely often have easy acceptance certificates. In particular, we provide equivalent characterizations of these classes in terms of relative generalized Kolmogorov complexity, showing that they are robust. We also provide structural conditions---regarding immunity and class collapses---that put upper and lower bounds on the sizes of these two classes. Finally, we provide negative results showing that some of our positive claims are optimal with regard to being relativizable. Our negative results are proven using a novel observation: we show that the classical ``wide spacing'' oracle construction technique yields instant non-bi-immunity results. Furthermore, we establish a result that improves upon Baker, Gill, and Solovay's classical result that NP \neq P = NP \cap coNP holds in some relativized world.<|reference_end|> | arxiv | @article{hemaspaandra1999easy,
title={Easy Sets and Hard Certificate Schemes},
author={Lane A. Hemaspaandra and Joerg Rothe and Gerd Wechsung},
journal={Acta Informatica vol. 34, no 11, pp. 859--879, 1997},
year={1999},
number={earlier version appeared as FSU Jena TR Math/95/5},
archivePrefix={arXiv},
eprint={cs/9907035},
primaryClass={cs.CC}
} | hemaspaandra1999easy |
arxiv-676399 | cs/9907036 | Exact Analysis of Dodgson Elections: Lewis Carroll's 1876 Voting System is Complete for Parallel Access to NP | <|reference_start|>Exact Analysis of Dodgson Elections: Lewis Carroll's 1876 Voting System is Complete for Parallel Access to NP: In 1876, Lewis Carroll proposed a voting system in which the winner is the candidate who with the fewest changes in voters' preferences becomes a Condorcet winner---a candidate who beats all other candidates in pairwise majority-rule elections. Bartholdi, Tovey, and Trick provided a lower bound---NP-hardness---on the computational complexity of determining the election winner in Carroll's system. We provide a stronger lower bound and an upper bound that matches our lower bound. In particular, determining the winner in Carroll's system is complete for parallel access to NP, i.e., it is complete for $\Theta_2^p$, for which it becomes the most natural complete problem known. It follows that determining the winner in Carroll's elections is not NP-complete unless the polynomial hierarchy collapses.<|reference_end|> | arxiv | @article{hemaspaandra1999exact,
title={Exact Analysis of Dodgson Elections: Lewis Carroll's 1876 Voting System
is Complete for Parallel Access to NP},
author={Edith Hemaspaandra and Lane A. Hemaspaandra and Joerg Rothe},
journal={Journal of the ACM vol. 44, no. 6, pp. 806--825, 1997},
year={1999},
number={earlier version appeared as University of Rochester TR-96-640},
archivePrefix={arXiv},
eprint={cs/9907036},
primaryClass={cs.CC}
} | hemaspaandra1999exact |
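
Carroll's system is built around the notion of a Condorcet winner, and the hardness result above concerns the minimal number of preference changes needed to create one. The sketch below implements only the easy part, the pairwise-majority check that defines a Condorcet winner; computing the Dodgson score itself is the hard problem and is not attempted. The profile representation is an assumption of the sketch.

```python
def condorcet_winner(profile):
    # profile: list of voter preference orders, each a list of candidates with
    # the most preferred candidate first.  Returns the candidate who beats every
    # other candidate in pairwise majority contests, or None if there is none.
    candidates = set(profile[0])

    def beats(a, b):
        wins = sum(1 for order in profile if order.index(a) < order.index(b))
        return wins > len(profile) / 2

    for c in candidates:
        if all(beats(c, other) for other in candidates if other != c):
            return c
    return None

# Example: three voters ranking candidates a, b, c.
profile = [["a", "b", "c"], ["a", "c", "b"], ["b", "a", "c"]]
print(condorcet_winner(profile))  # -> "a" (beats b two to one, beats c three to nil)
```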
arxiv-676400 | cs/9907037 | Boolean Operations, Joins, and the Extended Low Hierarchy | <|reference_start|>Boolean Operations, Joins, and the Extended Low Hierarchy: We prove that the join of two sets may actually fall into a lower level of the extended low hierarchy than either of the sets. In particular, there exist sets that are not in the second level of the extended low hierarchy, EL_2, yet their join is in EL_2. That is, in terms of extended lowness, the join operator can lower complexity. Since in a strong intuitive sense the join does not lower complexity, our result suggests that the extended low hierarchy is unnatural as a complexity measure. We also study the closure properties of EL_2 and prove that EL_2 is not closed under certain Boolean operations. To this end, we establish the first known (and optimal) EL_2 lower bounds for certain notions generalizing Selman's P-selectivity, which may be regarded as an interesting result in its own right.<|reference_end|> | arxiv | @article{hemaspaandra1999boolean,
title={Boolean Operations, Joins, and the Extended Low Hierarchy},
author={Lane A. Hemaspaandra and Zhigen Jiang and Joerg Rothe and Osamu Watanabe},
journal={Theoretical Computer Science vol. 205, no. 1-2, pp. 317--327, 1998},
year={1999},
number={earlier version appeared as University of Rochester TR-96-627},
archivePrefix={arXiv},
eprint={cs/9907037},
primaryClass={cs.CC}
} | hemaspaandra1999boolean |