Schema: corpus_id (string, 7-12 chars); paper_id (string, 9-16 chars); title (string, 1-261 chars); abstract (string, 70-4.02k chars); source (1 class); bibtex (string, 208-20.9k chars); citation_key (string, 6-100 chars)
arxiv-676401
cs/9907038
A Second Step Towards Complexity-Theoretic Analogs of Rice's Theorem
<|reference_start|>A Second Step Towards Complexity-Theoretic Analogs of Rice's Theorem: Rice's Theorem states that every nontrivial language property of the recursively enumerable sets is undecidable. Borchert and Stephan initiated the search for complexity-theoretic analogs of Rice's Theorem. In particular, they proved that every nontrivial counting property of circuits is UP-hard, and that a number of closely related problems are SPP-hard. The present paper studies whether their UP-hardness result itself can be improved to SPP-hardness. We show that their UP-hardness result cannot be strengthened to SPP-hardness unless unlikely complexity class containments hold. Nonetheless, we prove that every P-constructibly bi-infinite counting property of circuits is SPP-hard. We also raise their general lower bound from unambiguous nondeterminism to constant-ambiguity nondeterminism.<|reference_end|>
arxiv
@article{hemaspaandra1999a, title={A Second Step Towards Complexity-Theoretic Analogs of Rice's Theorem}, author={Lane A. Hemaspaandra and Joerg Rothe}, journal={arXiv preprint arXiv:cs/9907038}, year={1999}, number={earlier version appeared as University of Rochester TR-97-662}, archivePrefix={arXiv}, eprint={cs/9907038}, primaryClass={cs.CC} }
hemaspaandra1999a
arxiv-676402
cs/9907039
Raising NP Lower Bounds to Parallel NP Lower Bounds
<|reference_start|>Raising NP Lower Bounds to Parallel NP Lower Bounds: A decade ago, a beautiful paper by Wagner developed a ``toolkit'' that in certain cases allows one to prove problems hard for parallel access to NP. However, the problems his toolkit applies to most directly are not overly natural. During the past year, problems that previously were known only to be NP-hard or coNP-hard have been shown to be hard even for the class of sets solvable via parallel access to NP. Many of these problems are longstanding and extremely natural, such as the Minimum Equivalent Expression problem (which was the original motivation for creating the polynomial hierarchy), the problem of determining the winner in the election system introduced by Lewis Carroll in 1876, and the problem of determining on which inputs heuristic algorithms perform well. In the present article, we survey this recent progress in raising lower bounds.<|reference_end|>
arxiv
@article{hemaspaandra1999raising, title={Raising NP Lower Bounds to Parallel NP Lower Bounds}, author={Edith Hemaspaandra and Lane A. Hemaspaandra and Joerg Rothe}, journal={SIGACT News vol. 28, no. 2, pp. 2--13, 1997}, year={1999}, number={earlier version appeared as University of Rochester TR-97-658}, archivePrefix={arXiv}, eprint={cs/9907039}, primaryClass={cs.CC} }
hemaspaandra1999raising
arxiv-676403
cs/9907040
Characterizations of the Existence of Partial and Total One-Way Permutations
<|reference_start|>Characterizations of the Existence of Partial and Total One-Way Permutations: In this note, we study the easy certificate classes introduced by Hemaspaandra, Rothe, and Wechsung, with regard to the question of whether or not surjective one-way functions exist. This is an important open question in cryptology. We show that the existence of partial one-way permutations can be characterized by separating P from the class of UP sets that, for all unambiguous polynomial-time Turing machines accepting them, always have easy (i.e., polynomial-time computable) certificates. This extends results of Grollmann and Selman. By Gr\"adel's recent results about one-way functions, this also links statements about easy certificates of NP sets with statements in finite model theory. Similarly, there exist surjective poly-one one-way functions if and only if there is a set L in P such that not all FewP machines accepting L always have easy certificates. We also establish a condition necessary and sufficient for the existence of (total) one-way permutations.<|reference_end|>
arxiv
@article{rothe1999characterizations, title={Characterizations of the Existence of Partial and Total One-Way Permutations}, author={Joerg Rothe and Lane A. Hemaspaandra}, journal={arXiv preprint arXiv:cs/9907040}, year={1999}, archivePrefix={arXiv}, eprint={cs/9907040}, primaryClass={cs.CC cs.CR} }
rothe1999characterizations
arxiv-676404
cs/9907041
Restrictive Acceptance Suffices for Equivalence Problems
<|reference_start|>Restrictive Acceptance Suffices for Equivalence Problems: One way of suggesting that an NP problem may not be NP-complete is to show that it is in the class UP. We suggest an analogous new approach---weaker in strength of evidence but more broadly applicable---to suggesting that concrete~NP problems are not NP-complete. In particular we introduce the class EP, the subclass of NP consisting of those languages accepted by NP machines that when they accept always have a number of accepting paths that is a power of two. Since if any NP-complete set is in EP then all NP sets are in EP, it follows---with whatever degree of strength one believes that EP differs from NP---that membership in EP can be viewed as evidence that a problem is not NP-complete. We show that the negation equivalence problem for OBDDs (ordered binary decision diagrams) and the interchange equivalence problem for 2-dags are in EP. We also show that for boolean negation the equivalence problem is in EP^{NP}, thus tightening the existing NP^{NP} upper bound. We show that FewP, bounded ambiguity polynomial time, is contained in EP, a result that is not known to follow from the previous SPP upper bound. For the three problems and classes just mentioned with regard to EP, no proof of membership/containment in UP is known, and for the problem just mentioned with regard to EP^{NP}, no proof of membership in UP^{NP} is known. Thus, EP is indeed a tool that gives evidence against NP-completeness in natural cases where UP cannot currently be applied.<|reference_end|>
arxiv
@article{borchert1999restrictive, title={Restrictive Acceptance Suffices for Equivalence Problems}, author={Bernd Borchert and Lane A. Hemaspaandra and Joerg Rothe}, journal={arXiv preprint arXiv:cs/9907041}, year={1999}, number={Revises Friedrich-Schiller-Universit\"at Jena Technical Report Math/Inf/96/13}, archivePrefix={arXiv}, eprint={cs/9907041}, primaryClass={cs.CC} }
borchert1999restrictive
arxiv-676405
cs/9907042
Raising Reliability of Web Search Tool Research through Replication and Chaos Theory
<|reference_start|>Raising Reliability of Web Search Tool Research through Replication and Chaos Theory: Because the World Wide Web is a dynamic collection of information, the Web search tools (or "search engines") that index the Web are dynamic. Traditional information retrieval evaluation techniques may not provide reliable results when applied to the Web search tools. This study is the result of ten replications of the classic 1996 Ding and Marchionini Web search tool research. It explores the effects that replication can have on transforming unreliable results from one iteration into replicable and therefore reliable results after multiple iterations.<|reference_end|>
arxiv
@article{nicholson1999raising, title={Raising Reliability of Web Search Tool Research through Replication and Chaos Theory}, author={Scott Nicholson}, journal={arXiv preprint arXiv:cs/9907042}, year={1999}, archivePrefix={arXiv}, eprint={cs/9907042}, primaryClass={cs.IR cs.DL} }
nicholson1999raising
arxiv-676406
cs/9907043
A simple C++ library for manipulating scientific data sets as structured data
<|reference_start|>A simple C++ library for manipulating scientific data sets as structured data: Representing scientific data sets efficiently on external storage usually involves converting them to a byte string representation using specialized reader/writer routines. The resulting storage files are frequently difficult to interpret without these specialized routines as they do not contain information about the logical structure of the data. Avoiding such problems usually involves heavy-weight data format libraries or data base systems. We present a simple C++ library that allows one to create and access data files that store structured data. The structure of the data is described by a data type that can be built from elementary data types (integer and floating-point numbers, byte strings) and composite data types (arrays, structures, unions). An abstract data access class presents the data to the application. Different actual data file structures can be implemented under this layer. This method is particularly suited to applications that require complex data structures, e.g. molecular dynamics simulations. Extensions such as late type binding and object persistence are discussed.<|reference_end|>
arxiv
@article{best1999a, title={A simple C++ library for manipulating scientific data sets as structured data}, author={Christoph Best (ZIB, Berlin, and J. v. Neumann Institute, Juelich)}, journal={arXiv preprint arXiv:cs/9907043}, year={1999}, number={TR 98-06 (ZIB Berlin)}, archivePrefix={arXiv}, eprint={cs/9907043}, primaryClass={cs.CE cs.DB} }
best1999a
arxiv-676407
cs/9908001
Detecting Sub-Topic Correspondence through Bipartite Term Clustering
<|reference_start|>Detecting Sub-Topic Correspondence through Bipartite Term Clustering: This paper addresses a novel task of detecting sub-topic correspondence in a pair of text fragments, enhancing common notions of text similarity. This task is addressed by coupling corresponding term subsets through bipartite clustering. The paper presents a cost-based clustering scheme and compares it with a bipartite version of the single-link method, providing illustrative results.<|reference_end|>
arxiv
@article{marx1999detecting, title={Detecting Sub-Topic Correspondence through Bipartite Term Clustering}, author={Zvika Marx (1 and 2) and Ido Dagan (1) and Eli Shamir (2) ((1) Bar-Ilan University, (2) The Hebrew University of Jerusalem)}, journal={Proceedings of ACL'99 Workshop on Unsupervised Learning in Natural Language Processing, 1999, pp 45-51}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908001}, primaryClass={cs.CL} }
marx1999detecting
arxiv-676408
cs/9908002
After Compilers and Operating Systems : The Third Advance in Application Support
<|reference_start|>After Compilers and Operating Systems : The Third Advance in Application Support: After compilers and operating systems, TSIAs are the third advance in application support. A compiler supports a high level application definition in a programming language. An operating system supports a high level interface to the resources used by an application execution. A Task System and Item Architecture (TSIA) provides an application with a transparent reliable, distributed, heterogeneous, adaptive, dynamic, real-time, interactive, parallel, secure or other execution. In addition to supporting the application execution, a TSIA also supports the application definition. This run-time support for the definition is complementary to the compile-time support of a compiler. For example, this allows a language similar to Fortran or C to deliver features promised by functional computing. While many TSIAs exist, they previously have not been recognized as such and have served only a particular type of application. Existing TSIAs and other projects demonstrate that TSIAs are feasible for most applications. As the next paradigm for application support, the TSIA simplifies and unifies existing computing practice and research. By solving many outstanding problems, the TSIA opens many, many new opportunities for computing.<|reference_end|>
arxiv
@article{burow1999after, title={After Compilers and Operating Systems : The Third Advance in Application Support}, author={Burkhard D. Burow}, journal={arXiv preprint arXiv:cs/9908002}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908002}, primaryClass={cs.PL cs.DC cs.OS} }
burow1999after
arxiv-676409
cs/9908003
Ununfoldable Polyhedra with Convex Faces
<|reference_start|>Ununfoldable Polyhedra with Convex Faces: Unfolding a convex polyhedron into a simple planar polygon is a well-studied problem. In this paper, we study the limits of unfoldability by studying nonconvex polyhedra with the same combinatorial structure as convex polyhedra. In particular, we give two examples of polyhedra, one with 24 convex faces and one with 36 triangular faces, that cannot be unfolded by cutting along edges. We further show that such a polyhedron can indeed be unfolded if cuts are allowed to cross faces. Finally, we prove that ``open'' polyhedra with triangular faces may not be unfoldable no matter how they are cut.<|reference_end|>
arxiv
@article{bern1999ununfoldable, title={Ununfoldable Polyhedra with Convex Faces}, author={Marshall Bern and Erik D. Demaine and David Eppstein and Eric Kuo and Andrea Mantler and Jack Snoeyink}, journal={Computational Geometry: Theory and Applications 24(2):51-62, February 2003}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908003}, primaryClass={cs.CG cs.DM} }
bern1999ununfoldable
arxiv-676410
cs/9908004
Extending the Stable Model Semantics with More Expressive Rules
<|reference_start|>Extending the Stable Model Semantics with More Expressive Rules: The rules associated with propositional logic programs and the stable model semantics are not expressive enough to let one write concise programs. This problem is alleviated by introducing some new types of propositional rules. Together with a decision procedure that has been used as a base for an efficient implementation, the new rules supplant the standard ones in practical applications of the stable model semantics.<|reference_end|>
arxiv
@article{simons1999extending, title={Extending the Stable Model Semantics with More Expressive Rules}, author={Patrik Simons}, journal={arXiv preprint arXiv:cs/9908004}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908004}, primaryClass={cs.LO cs.AI} }
simons1999extending
arxiv-676411
cs/9908005
Polygonal Chains Cannot Lock in 4D
<|reference_start|>Polygonal Chains Cannot Lock in 4D: We prove that, in all dimensions d>=4, every simple open polygonal chain and every tree may be straightened, and every simple closed polygonal chain may be convexified. These reconfigurations can be achieved by algorithms that use polynomial time in the number of vertices, and result in a polynomial number of ``moves.'' These results contrast with those known for d=2, where trees can ``lock,'' and for d=3, where open and closed chains can lock.<|reference_end|>
arxiv
@article{cocan1999polygonal, title={Polygonal Chains Cannot Lock in 4D}, author={Roxana Cocan and Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/9908005}, year={1999}, number={Smith Technical Report 063}, archivePrefix={arXiv}, eprint={cs/9908005}, primaryClass={cs.CG cs.DM} }
cocan1999polygonal
arxiv-676412
cs/9908006
Computational Geometry Column 36
<|reference_start|>Computational Geometry Column 36: Two results in "computational origami" are illustrated.<|reference_end|>
arxiv
@article{o'rourke1999computational, title={Computational Geometry Column 36}, author={Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/9908006}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908006}, primaryClass={cs.CG} }
o'rourke1999computational
arxiv-676413
cs/9908007
Computational Geometry Column 37
<|reference_start|>Computational Geometry Column 37: Open problems from the 15th Annual ACM Symposium on Computational Geometry.<|reference_end|>
arxiv
@article{demaine1999computational, title={Computational Geometry Column 37}, author={Erik D. Demaine and Joseph O'Rourke}, journal={arXiv preprint arXiv:cs/9908007}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908007}, primaryClass={cs.CG} }
demaine1999computational
arxiv-676414
cs/9908008
Secure Multicast in a WAN
<|reference_start|>Secure Multicast in a WAN: A secure reliable multicast protocol enables a process to send a message to a group of recipients such that all correct destinations receive the same message, despite the malicious efforts of fewer than a third of the total number of processes, including the sender. This has been shown to be a useful tool in building secure distributed services, albeit with a cost that typically grows linearly with the size of the system. For very large networks, for which this is prohibitive, we present two approaches for reducing the cost: First, we show a protocol whose cost is on the order of the number of tolerated failures. Secondly, we show how relaxing the consistency requirement to a probabilistic guarantee can reduce the associated cost, effectively to a constant.<|reference_end|>
arxiv
@article{malkhi1999secure, title={Secure Multicast in a WAN}, author={Dahlia Malkhi and Michael Merritt and Ohad Rodeh}, journal={arXiv preprint arXiv:cs/9908008}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908008}, primaryClass={cs.CR cs.DC} }
malkhi1999secure
arxiv-676415
cs/9908009
Secure Execution of Java Applets using a Remote Playground
<|reference_start|>Secure Execution of Java Applets using a Remote Playground: Mobile code presents a number of threats to machines that execute it. We introduce an approach for protecting machines and the resources they hold from mobile code, and describe a system based on our approach for protecting host machines from Java 1.1 applets. In our approach, each Java applet downloaded to the protected domain is rerouted to a dedicated machine (or set of machines), the {\em playground}, at which it is executed. Prior to execution the applet is transformed to use the downloading user's web browser as a graphics terminal for its input and output, and so the user has the illusion that the applet is running on her own machine. In reality, however, mobile code runs only in the sanitized environment of the playground, where user files cannot be mounted and from which only limited network connections are accepted by machines in the protected domain. Our playground thus provides a second level of defense against mobile code that circumvents language-based defenses. The paper presents the design and implementation of a playground for Java 1.1 applets, and discusses extensions of it for other forms of mobile code including Java 1.2.<|reference_end|>
arxiv
@article{malkhi1999secure, title={Secure Execution of Java Applets using a Remote Playground}, author={Dahlia Malkhi and Michael Reiter}, journal={arXiv preprint arXiv:cs/9908009}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908009}, primaryClass={cs.CR cs.NI} }
malkhi1999secure
arxiv-676416
cs/9908010
On Propagating Updates in a Byzantine Environment
<|reference_start|>On Propagating Updates in a Byzantine Environment: We study how to efficiently diffuse updates to a large distributed system of data replicas, some of which may exhibit arbitrary (Byzantine) failures. We assume that strictly fewer than $t$ replicas fail, and that each update is initially received by at least $t$ correct replicas. The goal is to diffuse each update to all correct replicas while ensuring that correct replicas accept no updates generated spuriously by faulty replicas. To achieve reliable diffusion, each correct replica accepts an update only after receiving it from at least $t$ others. We provide the first analysis of epidemic-style protocols for such environments. This analysis is fundamentally different from known analyses for the benign case due to our treatment of fully Byzantine failures---which, among other things, precludes the use of digital signatures for authenticating forwarded updates. We propose two epidemic-style diffusion algorithms and two measures that characterize the efficiency of diffusion algorithms in general. We characterize both of our algorithms according to these measures, and also prove lower bounds with regard to these measures that show that our algorithms are close to optimal.<|reference_end|>
arxiv
@article{malkhi1999on, title={On Propagating Updates in a Byzantine Environment}, author={Dahlia Malkhi and Yishay Mansour and Michael Reiter}, journal={arXiv preprint arXiv:cs/9908010}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908010}, primaryClass={cs.DC cs.CR} }
malkhi1999on
arxiv-676417
cs/9908011
The Load and Availability of Byzantine Quorum Systems
<|reference_start|>The Load and Availability of Byzantine Quorum Systems: Replicated services accessed via {\em quorums} enable each access to be performed at only a subset (quorum) of the servers, and achieve consistency across accesses by requiring any two quorums to intersect. Recently, $b$-masking quorum systems, whose intersections contain at least $2b+1$ servers, have been proposed to construct replicated services tolerant of $b$ arbitrary (Byzantine) server failures. In this paper we consider a hybrid fault model allowing benign failures in addition to the Byzantine ones. We present four novel constructions for $b$-masking quorum systems in this model, each of which has optimal {\em load} (the probability of access of the busiest server) or optimal availability (probability of some quorum surviving failures). To show optimality we also prove lower bounds on the load and availability of any $b$-masking quorum system in this model.<|reference_end|>
arxiv
@article{malkhi1999the, title={The Load and Availability of Byzantine Quorum Systems}, author={Dahlia Malkhi and Michael Reiter and Avishai Wool}, journal={arXiv preprint arXiv:cs/9908011}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908011}, primaryClass={cs.DC cs.CR} }
malkhi1999the
arxiv-676418
cs/9908012
Safe Deals Between Strangers
<|reference_start|>Safe Deals Between Strangers: E-business, information serving, and ubiquitous computing will create heavy request traffic from strangers or even incognitos. Such requests must be managed automatically. Two ways of doing this are well known: giving every incognito consumer the same treatment, and rendering service in return for money. However, different behavior will often be wanted, e.g., for a university library with different access policies for undergraduates, graduate students, faculty, alumni, citizens of the same state, and everyone else. For a data or process server contacted by client machines on behalf of users not previously known, we show how to provide reliable automatic access administration conforming to service agreements. Implementations scale well from very small collections of consumers and producers to immense client/server networks. Servers can deliver information, effect state changes, and control external equipment. Consumer privacy is easily addressed by the same protocol. We support consumer privacy, but allow servers to deny their resources to incognitos. A protocol variant even protects against statistical attacks by consortia of service organizations. One e-commerce application would put the consumer's tokens on a smart card whose readers are in vending kiosks. In e-business we can simplify supply chain administration. Our method can also be used in sensitive networks without introducing new security loopholes.<|reference_end|>
arxiv
@article{gladney1999safe, title={Safe Deals Between Strangers}, author={H.M. Gladney}, journal={arXiv preprint arXiv:cs/9908012}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908012}, primaryClass={cs.CR cs.DL} }
gladney1999safe
arxiv-676419
cs/9908013
Collective Intelligence for Control of Distributed Dynamical Systems
<|reference_start|>Collective Intelligence for Control of Distributed Dynamical Systems: We consider the El Farol bar problem, also known as the minority game (W. B. Arthur, ``The American Economic Review'', 84(2): 406--411 (1994), D. Challet and Y.C. Zhang, ``Physica A'', 256:514 (1998)). We view it as an instance of the general problem of how to configure the nodal elements of a distributed dynamical system so that they do not ``work at cross purposes'', in that their collective dynamics avoids frustration and thereby achieves a provided global goal. We summarize a mathematical theory for such configuration applicable when (as in the bar problem) the global goal can be expressed as minimizing a global energy function and the nodes can be expressed as minimizers of local free energy functions. We show that a system designed with that theory performs nearly optimally for the bar problem.<|reference_end|>
arxiv
@article{wolpert1999collective, title={Collective Intelligence for Control of Distributed Dynamical Systems}, author={David H. Wolpert and Kevin R. Wheeler and Kagan Tumer}, journal={arXiv preprint arXiv:cs/9908013}, year={1999}, doi={10.1209/epl/i2000-00208-x}, number={NASA-ARC-IC-99-44}, archivePrefix={arXiv}, eprint={cs/9908013}, primaryClass={cs.LG adap-org cond-mat cs.AI cs.DC cs.MA nlin.AO} }
wolpert1999collective
arxiv-676420
cs/9908014
An Introduction to Collective Intelligence
<|reference_start|>An Introduction to Collective Intelligence: This paper surveys the emerging science of how to design a ``COllective INtelligence'' (COIN). A COIN is a large multi-agent system where: (i) There is little to no centralized communication or control; and (ii) There is a provided world utility function that rates the possible histories of the full system. In particular, we are interested in COINs in which each agent runs a reinforcement learning (RL) algorithm. Rather than use a conventional modeling approach (e.g., model the system dynamics, and hand-tune agents to cooperate), we aim to solve the COIN design problem implicitly, via the ``adaptive'' character of the RL algorithms of each of the agents. This approach introduces an entirely new, profound design problem: Assuming the RL algorithms are able to achieve high rewards, what reward functions for the individual agents will, when pursued by those agents, result in high world utility? In other words, what reward functions will best ensure that we do not have phenomena like the tragedy of the commons, Braess's paradox, or the liquidity trap? Although still very young, research specifically concentrating on the COIN design problem has already resulted in successes in artificial domains, in particular in packet-routing, the leader-follower problem, and in variants of Arthur's El Farol bar problem. It is expected that as it matures and draws upon other disciplines related to COINs, this research will greatly expand the range of tasks addressable by human engineers. Moreover, in addition to drawing on them, such a fully developed science of COIN design may provide much insight into other already established scientific fields, such as economics, game theory, and population biology.<|reference_end|>
arxiv
@article{wolpert1999an, title={An Introduction to Collective Intelligence}, author={David H. Wolpert and Kagan Tumer}, journal={arXiv preprint arXiv:cs/9908014}, year={1999}, number={NASA-ARC-IC-99-63}, archivePrefix={arXiv}, eprint={cs/9908014}, primaryClass={cs.LG adap-org cond-mat cs.DC cs.MA nlin.AO} }
wolpert1999an
arxiv-676421
cs/9908015
Representing Scholarly Claims in Internet Digital Libraries: A Knowledge Modelling Approach
<|reference_start|>Representing Scholarly Claims in Internet Digital Libraries: A Knowledge Modelling Approach: This paper is concerned with tracking and interpreting scholarly documents in distributed research communities. We argue that current approaches to document description, and current technological infrastructures, particularly over the World Wide Web, provide poor support for these tasks. We describe the design of a digital library server which will enable authors to submit a summary of the contributions they claim their documents make, and their relations to the literature. We describe a knowledge-based Web environment to support the emergence of such a community-constructed semantic hypertext, and the services it could provide to assist the interpretation of an idea or document in the context of its literature. The discussion considers in detail how the approach addresses usability issues associated with knowledge structuring environments.<|reference_end|>
arxiv
@article{shum1999representing, title={Representing Scholarly Claims in Internet Digital Libraries: A Knowledge Modelling Approach}, author={Simon Buckingham Shum and Enrico Motta and John Domingue}, journal={arXiv preprint arXiv:cs/9908015}, year={1999}, number={KMI-TR-80}, archivePrefix={arXiv}, eprint={cs/9908015}, primaryClass={cs.DL cs.AI cs.HC cs.IR} }
shum1999representing
arxiv-676422
cs/9908016
Quadrilateral Meshing by Circle Packing
<|reference_start|>Quadrilateral Meshing by Circle Packing: We use circle-packing methods to generate quadrilateral meshes for polygonal domains, with guaranteed bounds both on the quality and the number of elements. We show that these methods can generate meshes of several types: (1) the elements form the cells of a Voronoi diagram, (2) all elements have two opposite right angles, (3) all elements are kites, or (4) all angles are at most 120 degrees. In each case the total number of elements is O(n), where n is the number of input vertices.<|reference_end|>
arxiv
@article{bern1999quadrilateral, title={Quadrilateral Meshing by Circle Packing}, author={Marshall Bern and David Eppstein}, journal={Int. J. Comp. Geom. \& Appl. 10(4):347-360, Aug. 2000}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908016}, primaryClass={cs.CG} }
bern1999quadrilateral
arxiv-676423
cs/9908017
A Differential Invariant for Zooming
<|reference_start|>A Differential Invariant for Zooming: This paper presents an invariant under scaling and linear brightness change. The invariant is based on differentials and therefore is a local feature. Rotationally invariant 2-d differential Gaussian operators up to third order are proposed for the implementation of the invariant. The performance is analyzed by simulating a camera zoom-out.<|reference_end|>
arxiv
@article{siebert1999a, title={A Differential Invariant for Zooming}, author={Andreas Siebert}, journal={Proceedings 1999 International Conference on Image Processing, Kobe, 25-28 October 1999}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908017}, primaryClass={cs.CV} }
siebert1999a
arxiv-676424
cs/9908018
Construction of regular languages and recognizability of polynomials
<|reference_start|>Construction of regular languages and recognizability of polynomials: A generalization of numeration systems in which the set N of the natural numbers is recognizable by finite automata can be obtained by describing a lexicographically ordered infinite regular language. Here we show that if P in Q[x] is a polynomial such that P(N) is a subset of N, then we can construct a numeration system in which the set of representations of P(N) is regular. The main issue in this construction is to set up a regular language with a density function equal to P(n+1)-P(n) for n large enough.<|reference_end|>
arxiv
@article{rigo1999construction, title={Construction of regular languages and recognizability of polynomials}, author={Michel Rigo}, journal={arXiv preprint arXiv:cs/9908018}, year={1999}, archivePrefix={arXiv}, eprint={cs/9908018}, primaryClass={cs.CC} }
rigo1999construction
arxiv-676425
cs/9909001
Emerging Challenges in Computational Topology
<|reference_start|>Emerging Challenges in Computational Topology: Here we present the results of the NSF-funded Workshop on Computational Topology, which met on June 11 and 12 in Miami Beach, Florida. This report identifies important problems involving both computation and topology.<|reference_end|>
arxiv
@article{bern1999emerging, title={Emerging Challenges in Computational Topology}, author={Marshall Bern and David Eppstein and Pankaj K. Agarwal and Nina Amenta and Paul Chew and Tamal Dey and David P. Dobkin and Herbert Edelsbrunner and Cindy Grimm and Leonidas J. Guibas and John Harer and Joel Hass and Andrew Hicks and Carroll K. Johnson and Gilad Lerman and David Letscher and Paul Plassmann and Eric Sedgwick and Jack Snoeyink and Jeff Weeks and Chee Yap and Denis Zorin}, journal={arXiv preprint arXiv:cs/9909001}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909001}, primaryClass={cs.CG math.GT} }
bern1999emerging
arxiv-676426
cs/9909002
Semantic robust parsing for noun extraction from natural language queries
<|reference_start|>Semantic robust parsing for noun extraction from natural language queries: This paper describes how robust parsing techniques can be fruitfully applied to building a query generation module which is part of a pipelined NLP architecture aimed at processing natural language queries in a restricted domain. We want to show that semantic robustness represents a key issue in those NLP systems where it is more likely to have partial and ill-formed utterances due to various factors (e.g. noisy environments, low quality of speech recognition modules, etc.) and where it is necessary to succeed, even if partially, in extracting some meaningful information.<|reference_end|>
arxiv
@article{ballim1999semantic, title={Semantic robust parsing for noun extraction from natural language queries}, author={Afzal Ballim and Vincenzo Pallotta}, journal={Proceedings of WPDI'99 (Workshop on Procedures in Discourse Interpretation), 1999, Iasi, Romania}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909002}, primaryClass={cs.CL} }
ballim1999semantic
arxiv-676427
cs/9909003
Iterative Deepening Branch and Bound
<|reference_start|>Iterative Deepening Branch and Bound: In tree search problems the best-first search algorithm needs too much space. To remove this drawback, the IDA* algorithm was developed, which is efficient in both space and time. But IDA* cannot give an optimal solution for real-valued problems like Flow Shop Scheduling, Travelling Salesman and 0/1 Knapsack, due to their real-valued cost estimates. Thus further modifications are made to it, and the Iterative Deepening Branch and Bound search algorithm is developed, which meets the requirements. We have tried using this algorithm for the Flow Shop Scheduling problem and have found that it is quite effective.<|reference_end|>
arxiv
@article{mohanty1999iterative, title={Iterative Deepening Branch and Bound}, author={S. Mohanty (1) and R.N. Behera (2) ((1) Department of Computer Science and Application Utkal University, Bhubaneswar, India, (2) National Informatics Centre, Puri, India)}, journal={arXiv preprint arXiv:cs/9909003}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909003}, primaryClass={cs.AI} }
mohanty1999iterative
arxiv-676428
cs/9909004
Convex Tours of Bounded Curvature
<|reference_start|>Convex Tours of Bounded Curvature: We consider the motion planning problem for a point constrained to move along a smooth closed convex path of bounded curvature. The workspace of the moving point is bounded by a convex polygon with $m$ vertices, containing an obstacle in the form of a simple polygon with $n$ vertices. We present an O(m+n) time algorithm that finds the path going around the obstacle whose curvature is the smallest possible.<|reference_end|>
arxiv
@article{boissonnat1999convex, title={Convex Tours of Bounded Curvature}, author={Jean-Daniel Boissonnat and Jurek Czyzowicz and Olivier Devillers and Jean-Marc Robert and Mariette Yvinec}, journal={arXiv preprint arXiv:cs/9909004}, year={1999}, number={INRIA Research report 2375}, archivePrefix={arXiv}, eprint={cs/9909004}, primaryClass={cs.CG} }
boissonnat1999convex
arxiv-676429
cs/9909005
Computing largest circles separating two sets of segments
<|reference_start|>Computing largest circles separating two sets of segments: A circle $C$ separates two planar sets if it encloses one of the sets and its open interior disk does not meet the other set. A separating circle is a largest one if it cannot be locally increased while still separating the two given sets. A Theta(n log n) optimal algorithm is proposed to find all largest circles separating two given sets of line segments when line segments are allowed to meet only at their endpoints. In the general case, when line segments may intersect $\Omega(n^2)$ times, our algorithm can be adapted to work in O(n alpha(n) log n) time and O(n alpha(n)) space, where alpha(n) represents the extremely slowly growing inverse of the Ackermann function.<|reference_end|>
arxiv
@article{boissonnat1999computing, title={Computing largest circles separating two sets of segments}, author={Jean-Daniel Boissonnat and Jurek Czyzowicz and Olivier Devillers and Jorge Urrutia and Mariette Yvinec}, journal={arXiv preprint arXiv:cs/9909005}, year={1999}, number={INRIA Research report 2705}, archivePrefix={arXiv}, eprint={cs/9909005}, primaryClass={cs.CG} }
boissonnat1999computing
arxiv-676430
cs/9909006
Motion Planning of Legged Robots
<|reference_start|>Motion Planning of Legged Robots: We study the problem of computing the free space F of a simple legged robot called the spider robot. The body of this robot is a single point and the legs are attached to the body. The robot is subject to two constraints: each leg has a maximal extension R (accessibility constraint) and the body of the robot must lie above the convex hull of its feet (stability constraint). Moreover, the robot can only put its feet on some regions, called the foothold regions. The free space F is the set of positions of the body of the robot such that there exists a set of accessible footholds for which the robot is stable. We present an efficient algorithm that computes F in O(n^2 log n) time using O(n^2 alpha(n)) space for n discrete point footholds, where alpha(n) is an extremely slowly growing function (alpha(n) <= 3 for any practical value of n). We also present an algorithm for computing F when the foothold regions are pairwise disjoint polygons with n edges in total. This algorithm computes F in O(n^2 alpha_8(n) log n) time using O(n^2 alpha_8(n)) space (alpha_8(n) is also an extremely slowly growing function). These results are close to optimal since Omega(n^2) is a lower bound for the size of F.<|reference_end|>
arxiv
@article{boissonnat1999motion, title={Motion Planning of Legged Robots}, author={Jean-Daniel Boissonnat and Olivier Devillers and Sylvain Lazard}, journal={arXiv preprint arXiv:cs/9909006}, year={1999}, doi={10.1137/S0097539797326289}, number={INRIA Research report 3214}, archivePrefix={arXiv}, eprint={cs/9909006}, primaryClass={cs.CG} }
boissonnat1999motion
arxiv-676431
cs/9909007
Circular Separability of Polygons
<|reference_start|>Circular Separability of Polygons: Two planar sets are circularly separable if there exists a circle enclosing one of the sets and whose open interior disk does not intersect the other set. This paper studies two problems related to circular separability. A linear-time algorithm is proposed to decide if two polygons are circularly separable. The algorithm outputs the smallest separating circle. The second problem asks for the largest circle included in a preprocessed, convex polygon, under some point and/or line constraints. The resulting circle must contain the query points and it must lie in the halfplanes delimited by the query lines.<|reference_end|>
arxiv
@article{boissonnat1999circular, title={Circular Separability of Polygons}, author={Jean-Daniel Boissonnat and Jurek Czyzowicz and Olivier Devillers and Mariette Yvinec}, journal={arXiv preprint arXiv:cs/9909007}, year={1999}, number={INRIA Research report 2406}, archivePrefix={arXiv}, eprint={cs/9909007}, primaryClass={cs.CG} }
boissonnat1999circular
arxiv-676432
cs/9909008
Not Just a Matter of Time: Field Differences and the Shaping of Electronic Media in Supporting Scientific Communication
<|reference_start|>Not Just a Matter of Time: Field Differences and the Shaping of Electronic Media in Supporting Scientific Communication: The shift towards the use of electronic media in scholarly communication appears to be an inescapable imperative. However, these shifts are uneven, both with respect to field and with respect to the form of communication. Different scientific fields have developed and use distinctly different communicative forums, both in the paper and electronic arenas, and these forums play different communicative roles within the field. One common claim is that we are in the early stages of an electronic revolution, that it is only a matter of time before other fields catch up with the early adopters, and that all fields converge on a stable set of electronic forums. A social shaping of technology (SST) perspective helps us to identify important social forces centered around disciplinary constructions of trust and of legitimate communication that pull against convergence. This analysis concludes that communicative plurality and communicative heterogeneity are durable features of the scholarly landscape, and that we are likely to see field differences in the use of and meaning ascribed to communications forums persist, even as overall use of electronic communications technologies both in science and in society as a whole increases.<|reference_end|>
arxiv
@article{kling1999not, title={Not Just a Matter of Time: Field Differences and the Shaping of Electronic Media in Supporting Scientific Communication}, author={Rob Kling and Geoffrey McKim}, journal={arXiv preprint arXiv:cs/9909008}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909008}, primaryClass={cs.CY} }
kling1999not
arxiv-676433
cs/9909009
The Rough Guide to Constraint Propagation
<|reference_start|>The Rough Guide to Constraint Propagation: We provide here a simple, yet very general framework that allows us to explain several constraint propagation algorithms in a systematic way. In particular, using the notions of commutativity and semi-commutativity, we show how the well-known AC-3, PC-2, DAC and DPC algorithms are instances of a single generic algorithm. The work reported here extends and simplifies that of Apt, cs.AI/9811024.<|reference_end|>
arxiv
@article{apt1999the, title={The Rough Guide to Constraint Propagation}, author={Krzysztof R. Apt}, journal={arXiv preprint arXiv:cs/9909009}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909009}, primaryClass={cs.AI cs.PL} }
apt1999the
arxiv-676434
cs/9909010
Automatic Generation of Constraint Propagation Algorithms for Small Finite Domains
<|reference_start|>Automatic Generation of Constraint Propagation Algorithms for Small Finite Domains: We study here constraint satisfaction problems that are based on predefined, explicitly given finite constraints. To solve them we propose a notion of rule consistency that can be expressed in terms of rules derived from the explicit representation of the initial constraints. This notion of local consistency is weaker than arc consistency for constraints of arbitrary arity but coincides with it when all domains are unary or binary. For Boolean constraints rule consistency coincides with the closure under the well-known propagation rules for Boolean constraints. By generalizing the format of the rules we obtain a characterization of arc consistency in terms of so-called inclusion rules. The advantage of rule consistency and this rule-based characterization of arc consistency is that the algorithms that enforce both notions can be automatically generated, as CHR rules. So these algorithms could be integrated into constraint logic programming systems such as ECLiPSe. We illustrate the usefulness of this approach to constraint propagation by discussing the implementations of both algorithms and their use on various examples, including Boolean constraints, three-valued logic of Kleene, constraints dealing with Waltz's language for describing polyhedral scenes, and Allen's qualitative approach to temporal logic.<|reference_end|>
arxiv
@article{apt1999automatic, title={Automatic Generation of Constraint Propagation Algorithms for Small Finite Domains}, author={Krzysztof R. Apt and Eric Monfroy}, journal={arXiv preprint arXiv:cs/9909010}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909010}, primaryClass={cs.AI cs.PL} }
apt1999automatic
arxiv-676435
cs/9909011
Distributed Algorithms in Multihop Broadcast Networks
<|reference_start|>Distributed Algorithms in Multihop Broadcast Networks: Broadcast networks are often used in modern communication systems. A common broadcast network is a single-hop shared-media system, where a transmitted message is heard by all neighbors, such as some LAN networks. In this work we consider a more complex environment, in which a transmitted message is heard only by a group of neighbors, such as Ad-Hoc networks, satellite and radio networks, as well as wireless multistation backbone systems for mobile communication. It is important to design efficient algorithms for such environments. Our main result is a new Leader Election algorithm, with O(n) time complexity and O(n*lg(n)) message transmission complexity. Our distributed solution uses a propagation of information with feedback (PIF) building block tuned to the broadcast media, and a special counting and joining approach for the election procedure phase. The latter is required for achieving the linear time. It is demonstrated that the broadcast model requires solutions which are different from those for the known point-to-point model.<|reference_end|>
arxiv
@article{cidon1999distributed, title={Distributed Algorithms in Multihop Broadcast Networks}, author={Israel Cidon and Osnat Mokryn}, journal={arXiv preprint arXiv:cs/9909011}, year={1999}, number={Technical Report CC #241, Center for Communication and Information Technologies, Technion - Israel Institute of Technology}, archivePrefix={arXiv}, eprint={cs/9909011}, primaryClass={cs.DC cs.NI} }
cidon1999distributed
arxiv-676436
cs/9909012
Certificate Revocation Paradigms
<|reference_start|>Certificate Revocation Paradigms: Research in the field of electronic signature confirmation has been active for some 20 years now. Unfortunately, present certificate-based solutions also come from that age, when no one knew about online data transmission. The official standardized X.509 framework also depends heavily on offline operations, one of the most complicated ones being certificate revocation handling. This is done via huge Certificate Revocation Lists, which are both inconvenient and expensive. Several improvements to these lists have been proposed, and in this report we try to analyze them briefly. We conclude that although it is possible to do better than in the original X.509 setting, none of the solutions presented so far is good enough.<|reference_end|>
arxiv
@article{willemson1999certificate, title={Certificate Revocation Paradigms}, author={Jan Willemson}, journal={arXiv preprint arXiv:cs/9909012}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909012}, primaryClass={cs.CR} }
willemson1999certificate
arxiv-676437
cs/9909013
Self-stabilizing mutual exclusion on a ring, even if K=N
<|reference_start|>Self-stabilizing mutual exclusion on a ring, even if K=N: We show that, contrary to common belief, Dijkstra's self-stabilizing mutual exclusion algorithm on a ring [Dij74,Dij82] also stabilizes when the number of states per node is one less than the number of nodes on the ring.<|reference_end|>
arxiv
@article{hoepman1999self-stabilizing, title={Self-stabilizing mutual exclusion on a ring, even if K=N}, author={Jaap-Henk Hoepman}, journal={arXiv preprint arXiv:cs/9909013}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909013}, primaryClass={cs.DC} }
hoepman1999self-stabilizing
arxiv-676438
cs/9909014
Reasoning About Common Knowledge with Infinitely Many Agents
<|reference_start|>Reasoning About Common Knowledge with Infinitely Many Agents: Complete axiomatizations and exponential-time decision procedures are provided for reasoning about knowledge and common knowledge when there are infinitely many agents. The results show that reasoning about knowledge and common knowledge with infinitely many agents is no harder than when there are finitely many agents, provided that we can check the cardinality of certain set differences G - G', where G and G' are sets of agents. Since our complexity results are independent of the cardinality of the sets G involved, they represent improvements over the previous results even when the sets of agents involved are finite. Moreover, our results make clear the extent to which issues of complexity and completeness depend on how the sets of agents involved are represented.<|reference_end|>
arxiv
@article{halpern1999reasoning, title={Reasoning About Common Knowledge with Infinitely Many Agents}, author={Joseph Y. Halpern and Richard A. Shore}, journal={arXiv preprint arXiv:cs/9909014}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909014}, primaryClass={cs.LO cs.AI} }
halpern1999reasoning
arxiv-676439
cs/9909015
A decision-theoretic approach to reliable message delivery
<|reference_start|>A decision-theoretic approach to reliable message delivery: We argue that the tools of decision theory need to be taken more seriously in the specification and analysis of systems. We illustrate this by considering a simple problem involving reliable communication, showing how considerations of utility and probability can be used to decide when it is worth sending heartbeat messages and, if they are sent, how often they should be sent.<|reference_end|>
arxiv
@article{chu1999a, title={A decision-theoretic approach to reliable message delivery}, author={Francis C. Chu and Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/9909015}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909015}, primaryClass={cs.DC} }
chu1999a
arxiv-676440
cs/9909016
Least expected cost query optimization: an exercise in utility
<|reference_start|>Least expected cost query optimization: an exercise in utility: We identify two unreasonable, though standard, assumptions made by database query optimizers that can adversely affect the quality of the chosen evaluation plans. One assumption is that it is enough to optimize for the expected case---that is, the case where various parameters (like available memory) take on their expected value. The other assumption is that the parameters are constant throughout the execution of the query. We present an algorithm based on the ``System R''-style query optimization algorithm that does not rely on these assumptions. The algorithm we present chooses the plan of the least expected cost instead of the plan of least cost given some fixed value of the parameters. In execution environments that exhibit a high degree of variability, our techniques should result in better performance.<|reference_end|>
arxiv
@article{chu1999least, title={Least expected cost query optimization: an exercise in utility}, author={Francis C. Chu and Joseph Y. Halpern and Praveen Seshadri}, journal={arXiv preprint arXiv:cs/9909016}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909016}, primaryClass={cs.DB} }
chu1999least
arxiv-676441
cs/9909017
Finding an ordinary conic and an ordinary hyperplane
<|reference_start|>Finding an ordinary conic and an ordinary hyperplane: Given a finite set of non-collinear points in the plane, there exists a line that passes through exactly two points. Such a line is called an ordinary line. An efficient algorithm for computing such a line was proposed by Mukhopadhyay et al. In this note we extend this result in two directions. We first show how to use this algorithm to compute an ordinary conic, that is, a conic passing through exactly five points, assuming that all the points do not lie on the same conic. Both our proofs of existence and the consequent algorithms are simpler than previous ones. We next show how to compute an ordinary hyperplane in three and higher dimensions.<|reference_end|>
arxiv
@article{devillers1999finding, title={Finding an ordinary conic and an ordinary hyperplane}, author={Olivier Devillers and Asish Mukhopadhyay}, journal={arXiv preprint arXiv:cs/9909017}, year={1999}, number={INRIA Research report 3517}, archivePrefix={arXiv}, eprint={cs/9909017}, primaryClass={cs.CG} }
devillers1999finding
arxiv-676442
cs/9909018
Geometric compression for progressive transmission
<|reference_start|>Geometric compression for progressive transmission: The compression of geometric structures is a relatively new field of data compression. Since about 1995, several articles have dealt with the coding of meshes, most of them using the following approach: the vertices of the mesh are coded in an order such that it partially contains the topology of the mesh. At the same time, some simple rules attempt to predict the position of the current vertex from the positions of its neighbours that have been previously coded. In this article, we describe a compression algorithm whose principle is completely different: the order of the vertices is used to compress their coordinates, and then the topology of the mesh is reconstructed from the vertices. This algorithm, particularly suited for terrain models, achieves compression factors that are slightly greater than those of the currently available algorithms, and moreover, it allows progressive and interactive transmission of the meshes.<|reference_end|>
arxiv
@article{devillers1999geometric, title={Geometric compression for progressive transmission}, author={Olivier Devillers and Pierre-Marie Gandoin}, journal={arXiv preprint arXiv:cs/9909018}, year={1999}, number={INRIA Research report 3766, in French}, archivePrefix={arXiv}, eprint={cs/9909018}, primaryClass={cs.CG cs.GR} }
devillers1999geometric
arxiv-676443
cs/9909019
Knowledge in Multi-Agent Systems: Initial Configurations and Broadcast
<|reference_start|>Knowledge in Multi-Agent Systems: Initial Configurations and Broadcast: The semantic framework for the modal logic of knowledge due to Halpern and Moses provides a way to ascribe knowledge to agents in distributed and multi-agent systems. In this paper we study two special cases of this framework: full systems and hypercubes. Both model static situations in which no agent has any information about another agent's state. Full systems and hypercubes are an appropriate model for the initial configurations of many systems of interest. We establish a correspondence between full systems and hypercube systems and certain classes of Kripke frames. We show that these classes of systems correspond to the same logic. Moreover, this logic is also the same as that generated by the larger class of weakly directed frames. We provide a sound and complete axiomatization, S5WDn, of this logic. Finally, we show that under certain natural assumptions, in a model where knowledge evolves over time, S5WDn characterizes the properties of knowledge not just at the initial configuration, but also at all later configurations. In particular, this holds for homogeneous broadcast systems, which capture settings in which agents are initially ignorant of each others local states, operate synchronously, have perfect recall and can communicate only by broadcasting.<|reference_end|>
arxiv
@article{lomuscio1999knowledge, title={Knowledge in Multi-Agent Systems: Initial Configurations and Broadcast}, author={A. R. Lomuscio and R. van der Meyden and M. D. Ryan}, journal={arXiv preprint arXiv:cs/9909019}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909019}, primaryClass={cs.LO cs.AI} }
lomuscio1999knowledge
arxiv-676444
cs/9909020
Query Order
<|reference_start|>Query Order: We study the effect of query order on computational power, and show that $\pjk$---the languages computable via a polynomial-time machine given one query to the jth level of the boolean hierarchy followed by one query to the kth level of the boolean hierarchy---equals $\redttnp{j+2k-1}$ if j is even and k is odd, and equals $\redttnp{j+2k}$ otherwise. Thus, unless the polynomial hierarchy collapses, it holds that for each $1\leq j \leq k$: $\pjk = \pkj \iff (j=k) \lor (j \mbox{ is even} \land k=j+1)$. We extend our analysis to apply to more general query classes.<|reference_end|>
arxiv
@article{hemaspaandra1999query, title={Query Order}, author={Lane A. Hemaspaandra and Harald Hempel and Gerd Wechsung}, journal={SIAM Journal on Computing, 28, 637-651, 1999}, year={1999}, archivePrefix={arXiv}, eprint={cs/9909020}, primaryClass={cs.CC} }
hemaspaandra1999query
arxiv-676445
cs/9910001
Fixed-parameter tractability, definability, and model checking
<|reference_start|>Fixed-parameter tractability, definability, and model checking: In this article, we study parameterized complexity theory from the perspective of logic, or more specifically, descriptive complexity theory. We propose to consider parameterized model-checking problems for various fragments of first-order logic as generic parameterized problems and show how this approach can be useful in studying both fixed-parameter tractability and intractability. For example, we establish the equivalence between the model-checking for existential first-order logic, the homomorphism problem for relational structures, and the substructure isomorphism problem. Our main tractability result shows that model-checking for first-order formulas is fixed-parameter tractable when restricted to a class of input structures with an excluded minor. On the intractability side, for every t >= 0 we prove an equivalence between model-checking for first-order formulas with t quantifier alternations and the parameterized halting problem for alternating Turing machines with t alternations. We discuss the close connection between this alternation hierarchy and Downey and Fellows' W-hierarchy. On a more abstract level, we consider two forms of definability, called Fagin definability and slicewise definability, that are appropriate for describing parameterized problems. We give a characterization of the class FPT of all fixed-parameter tractable problems in terms of slicewise definability in finite variable least fixed-point logic, which is reminiscent of the Immerman-Vardi Theorem characterizing the class PTIME in terms of definability in least fixed-point logic.<|reference_end|>
arxiv
@article{flum1999fixed-parameter, title={Fixed-parameter tractability, definability, and model checking}, author={Joerg Flum and Martin Grohe}, journal={arXiv preprint arXiv:cs/9910001}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910001}, primaryClass={cs.CC cs.LO} }
flum1999fixed-parameter
arxiv-676446
cs/9910002
What's Up with Downward Collapse: Using the Easy-Hard Technique to Link Boolean and Polynomial Hierarchy Collapses
<|reference_start|>What's Up with Downward Collapse: Using the Easy-Hard Technique to Link Boolean and Polynomial Hierarchy Collapses: During the past decade, nine papers have obtained increasingly strong consequences from the assumption that boolean or bounded-query hierarchies collapse. The final four papers of this nine-paper progression actually achieve downward collapse---that is, they show that high-level collapses induce collapses at (what beforehand were thought to be) lower complexity levels. For example, for each $k\geq 2$ it is now known that if $P^{\Sigma_k^p[1]}=P^{\Sigma_k^p[2]}$ then $PH=\Sigma_k^p$. This article surveys the history, the results, and the technique---the so-called easy-hard method---of these nine papers.<|reference_end|>
arxiv
@article{hemaspaandra1999what's, title={What's Up with Downward Collapse: Using the Easy-Hard Technique to Link Boolean and Polynomial Hierarchy Collapses}, author={Edith Hemaspaandra, Lane A. Hemaspaandra, and Harald Hempel}, journal={arXiv preprint arXiv:cs/9910002}, year={1999}, number={UR-CS-TR-98-682}, archivePrefix={arXiv}, eprint={cs/9910002}, primaryClass={cs.CC} }
hemaspaandra1999what's
arxiv-676447
cs/9910003
R_1-tt^SN(NP) Distinguishes Robust Many-One and Turing Completeness
<|reference_start|>R_1-tt^SN(NP) Distinguishes Robust Many-One and Turing Completeness: Do complexity classes have many-one complete sets if and only if they have Turing-complete sets? We prove that there is a relativized world in which a relatively natural complexity class - namely $R^{SN}_{1\text{-}tt}(NP)$, a downward closure of NP - has Turing-complete sets but has no many-one complete sets. In fact, we show that in the same relativized world this class has 2-truth-table complete sets but lacks 1-truth-table complete sets. As part of the groundwork for our result, we prove that $R^{SN}_{1\text{-}tt}(NP)$ has many equivalent forms having to do with ordered and parallel access to NP and $NP \cap coNP$.<|reference_end|>
arxiv
@article{hemaspaandra1999r_{1-tt}^{sn}(np), title={R_{1-tt}^{SN}(NP) Distinguishes Robust Many-One and Turing Completeness}, author={Edith Hemaspaandra, Lane A. Hemaspaandra, and Harald Hempel}, journal={Theory of Computing Systems, 31, 307-325, 1998}, year={1999}, number={earlier version appears as UR-CS-TR-96-635}, archivePrefix={arXiv}, eprint={cs/9910003}, primaryClass={cs.CC} }
hemaspaandra1999r_{1-tt}^{sn}(np)
arxiv-676448
cs/9910004
An Introduction to Query Order
<|reference_start|>An Introduction to Query Order: Hemaspaandra, Hempel, and Wechsung [cs.CC/9909020] raised the following questions: If one is allowed one question to each of two different information sources, does the order in which one asks the questions affect the class of problems that one can solve with the given access? If so, which order yields the greater computational power? The answers to these questions have been learned - insofar as they can be learned without resolving whether or not the polynomial hierarchy collapses - for both the polynomial hierarchy and the boolean hierarchy. In the polynomial hierarchy, query order never matters. In the boolean hierarchy, query order sometimes does not matter and, unless the polynomial hierarchy collapses, sometimes does matter. Furthermore, the study of query order has yielded dividends in seemingly unrelated areas, such as bottleneck computations and downward translation of equality. In this article, we present some of the central results on query order. The article is written in such a way as to encourage the reader to try his or her own hand at proving some of these results. We also give literature pointers to the quickly growing set of related results and applications.<|reference_end|>
arxiv
@article{hemaspaandra1999an, title={An Introduction to Query Order}, author={Edith Hemaspaandra, Lane A. Hemaspaandra, and Harald Hempel}, journal={Bulletin of the EATCS, 63, 93-107, 1997}, year={1999}, number={earlier version appears as UR-CS-TR-97-665}, archivePrefix={arXiv}, eprint={cs/9910004}, primaryClass={cs.CC} }
hemaspaandra1999an
arxiv-676449
cs/9910005
Query Order and the Polynomial Hierarchy
<|reference_start|>Query Order and the Polynomial Hierarchy: Hemaspaandra, Hempel, and Wechsung [cs.CC/9909020] initiated the field of query order, which studies the ways in which computational power is affected by the order in which information sources are accessed. The present paper studies, for the first time, query order as it applies to the levels of the polynomial hierarchy. We prove that the levels of the polynomial hierarchy are order-oblivious. Yet, we also show that these ordered query classes form new levels in the polynomial hierarchy unless the polynomial hierarchy collapses. We prove that all leaf language classes - and thus essentially all standard complexity classes - inherit all order-obliviousness results that hold for P.<|reference_end|>
arxiv
@article{hemaspaandra1999query, title={Query Order and the Polynomial Hierarchy}, author={Edith Hemaspaandra, Lane A. Hemaspaandra, and Harald Hempel}, journal={Journal of Universal Computer Science, 4, 574-588, 1998}, year={1999}, number={earlier version appears as UR-CS-TR-96-634}, archivePrefix={arXiv}, eprint={cs/9910005}, primaryClass={cs.CC} }
hemaspaandra1999query
arxiv-676450
cs/9910006
Self-Specifying Machines
<|reference_start|>Self-Specifying Machines: We study the computational power of machines that specify their own acceptance types, and show that they accept exactly the languages that $\leq_m^{\#}$-reduce to NP sets. A natural variant accepts exactly the languages that $\leq_m^{\#}$-reduce to P sets. We show that these two classes coincide if and only if $P^{\#P[1]} = P^{\#P[1]:NP[O(1)]}$, where the latter class denotes the sets acceptable via at most one question to $\#P$ followed by at most a constant number of questions to NP.<|reference_end|>
arxiv
@article{hemaspaandra1999self-specifying, title={Self-Specifying Machines}, author={Lane A. Hemaspaandra, Harald Hempel, and Gerd Wechsung}, journal={arXiv preprint arXiv:cs/9910006}, year={1999}, number={earlier version appears as UR-CS-TR-97-654}, archivePrefix={arXiv}, eprint={cs/9910006}, primaryClass={cs.CC} }
hemaspaandra1999self-specifying
arxiv-676451
cs/9910007
A Downward Collapse within the Polynomial Hierarchy
<|reference_start|>A Downward Collapse within the Polynomial Hierarchy: Downward collapse (a.k.a. upward separation) refers to cases where the equality of two larger classes implies the equality of two smaller classes. We provide an unqualified downward collapse result completely within the polynomial hierarchy. In particular, we prove that, for $k > 2$, if $P^{\Sigma_k^p[1]} = P^{\Sigma_k^p[2]}$ then $\Sigma_k^p = \Pi_k^p = PH$. We extend this to obtain a more general downward collapse result.<|reference_end|>
arxiv
@article{hemaspaandra1999a, title={A Downward Collapse within the Polynomial Hierarchy}, author={Edith Hemaspaandra, Lane A. Hemaspaandra, and Harald Hempel}, journal={SIAM Journal on Computing, 28, 383-393, 1999}, year={1999}, number={earlier version appears as UR-CS-TR-96-630}, archivePrefix={arXiv}, eprint={cs/9910007}, primaryClass={cs.CC} }
hemaspaandra1999a
arxiv-676452
cs/9910008
Translating Equality Downwards
<|reference_start|>Translating Equality Downwards: Downward translation of equality refers to cases where a collapse of some pair of complexity classes would induce a collapse of some other pair of complexity classes that (a priori) one expects are smaller. Recently, the first downward translation of equality was obtained that applied to the polynomial hierarchy - in particular, to bounded access to its levels [cs.CC/9910007]. In this paper, we provide a much broader downward translation that extends not only that downward translation but also that translation's elegant enhancement by Buhrman and Fortnow. Our work also sheds light on previous research on the structure of refined polynomial hierarchies, and strengthens the connection between the collapse of bounded query hierarchies and the collapse of the polynomial hierarchy.<|reference_end|>
arxiv
@article{hemaspaandra1999translating, title={Translating Equality Downwards}, author={Edith Hemaspaandra, Lane A. Hemaspaandra, and Harald Hempel}, journal={arXiv preprint arXiv:cs/9910008}, year={1999}, number={earlier version appears as UR-CS-TR-97-657}, archivePrefix={arXiv}, eprint={cs/9910008}, primaryClass={cs.CC} }
hemaspaandra1999translating
arxiv-676453
cs/9910009
Locked and Unlocked Polygonal Chains in 3D
<|reference_start|>Locked and Unlocked Polygonal Chains in 3D: In this paper, we study movements of simple polygonal chains in 3D. We say that an open, simple polygonal chain can be straightened if it can be continuously reconfigured to a straight sequence of segments in such a manner that both the length of each link and the simplicity of the chain are maintained throughout the movement. The analogous concept for closed chains is convexification: reconfiguration to a planar convex polygon. Chains that cannot be straightened or convexified are called locked. While there are open chains in 3D that are locked, we show that if an open chain has a simple orthogonal projection onto some plane, it can be straightened. For closed chains, we show that there are unknotted but locked closed chains, and we provide an algorithm for convexifying a planar simple polygon in 3D. All our algorithms require only O(n) basic ``moves'' and run in linear time.<|reference_end|>
arxiv
@article{biedl1999locked, title={Locked and Unlocked Polygonal Chains in 3D}, author={T. Biedl, E. Demaine, M. Demaine, S. Lazard, A. Lubiw, J. O'Rourke, M. Overmars, S. Robbins, I. Streinu, G. Toussaint, S. Whitesides}, journal={arXiv preprint arXiv:cs/9910009}, year={1999}, number={Smith Tech. Rep. 060}, archivePrefix={arXiv}, eprint={cs/9910009}, primaryClass={cs.CG cs.DM} }
biedl1999locked
arxiv-676454
cs/9910010
Communication Complexity Lower Bounds by Polynomials
<|reference_start|>Communication Complexity Lower Bounds by Polynomials: The quantum version of communication complexity allows the two communicating parties to exchange qubits and/or to make use of prior entanglement (shared EPR-pairs). Some lower bound techniques are available for qubit communication complexity, but except for the inner product function, no bounds are known for the model with unlimited prior entanglement. We show that the log-rank lower bound extends to the strongest model (qubit communication + unlimited prior entanglement). By relating the rank of the communication matrix to properties of polynomials, we are able to derive some strong bounds for exact protocols. In particular, we prove both the "log-rank conjecture" and the polynomial equivalence of quantum and classical communication complexity for various classes of functions. We also derive some weaker bounds for bounded-error quantum protocols.<|reference_end|>
arxiv
@article{buhrman1999communication, title={Communication Complexity Lower Bounds by Polynomials}, author={Harry Buhrman (CWI, Amsterdam) and Ronald de Wolf (CWI and U of Amsterdam)}, journal={arXiv preprint arXiv:cs/9910010}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910010}, primaryClass={cs.CC quant-ph} }
buhrman1999communication
arxiv-676455
cs/9910011
A statistical model for word discovery in child directed speech
<|reference_start|>A statistical model for word discovery in child directed speech: A statistical model for segmentation and word discovery in child directed speech is presented. An incremental unsupervised learning algorithm to infer word boundaries based on this model is described, and results of empirical tests showing that the algorithm is competitive with other models that have been used for similar tasks are also presented.<|reference_end|>
arxiv
@article{venkataraman1999a, title={A statistical model for word discovery in child directed speech}, author={Anand Venkataraman}, journal={arXiv preprint arXiv:cs/9910011}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910011}, primaryClass={cs.CL cs.LG} }
venkataraman1999a
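Since the abstract above describes the segmentation model only at a high level, here is a minimal dynamic-programming sketch of the kind of word-boundary inference such models involve; the `word_prob` callback and the toy lexicon are assumptions for illustration, not the paper's actual model:

```python
import math

def viterbi_segment(utterance, word_prob):
    # Choose the segmentation maximizing the product of estimated word
    # probabilities, via dynamic programming over prefix boundaries.
    # `word_prob` is a hypothetical callback (the paper's model learns
    # it incrementally, unsupervised); it must return a value in (0, 1].
    n = len(utterance)
    best = [0.0] + [-math.inf] * n   # best log-probability of each prefix
    back = [0] * (n + 1)             # start index of the last word
    for i in range(1, n + 1):
        for j in range(i):
            score = best[j] + math.log(word_prob(utterance[j:i]))
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n
    while i > 0:
        words.append(utterance[back[i]:i])
        i = back[i]
    return list(reversed(words))

# Toy usage: favor a tiny known lexicon, penalize everything else.
lexicon = {"the": 0.5, "dog": 0.3, "ate": 0.2}
print(viterbi_segment("thedogate",
                      lambda w: lexicon.get(w, 1e-6 ** len(w))))
# -> ['the', 'dog', 'ate']
```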
arxiv-676456
cs/9910012
The Complexity of Temporal Logic over the Reals
<|reference_start|>The Complexity of Temporal Logic over the Reals: It is shown that the decision problem for the temporal logic with until and since connectives over real-numbers time is PSPACE-complete.<|reference_end|>
arxiv
@article{reynolds1999the, title={The Complexity of Temporal Logic over the Reals}, author={M. Reynolds}, journal={arXiv preprint arXiv:cs/9910012}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910012}, primaryClass={cs.LO cs.CC} }
reynolds1999the
arxiv-676457
cs/9910013
Map Graphs
<|reference_start|>Map Graphs: We consider a modified notion of planarity, in which two nations of a map are considered adjacent when they share any point of their boundaries (not necessarily an edge, as planarity requires). Such adjacencies define a map graph. We give an NP characterization for such graphs, and a cubic time recognition algorithm for a restricted version: given a graph, decide whether it is realized by adjacencies in a map without holes, in which at most four nations meet at any point.<|reference_end|>
arxiv
@article{chen1999map, title={Map Graphs}, author={Zhi-Zhong Chen, Michelangelo Grigni, Christos Papadimitriou}, journal={arXiv preprint arXiv:cs/9910013}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910013}, primaryClass={cs.DM cs.DS} }
chen1999map
arxiv-676458
cs/9910014
Processor Verification Using Efficient Reductions of the Logic of Uninterpreted Functions to Propositional Logic
<|reference_start|>Processor Verification Using Efficient Reductions of the Logic of Uninterpreted Functions to Propositional Logic: The logic of equality with uninterpreted functions (EUF) provides a means of abstracting the manipulation of data by a processor when verifying the correctness of its control logic. By reducing formulas in this logic to propositional formulas, we can apply Boolean methods such as Ordered Binary Decision Diagrams (BDDs) and Boolean satisfiability checkers to perform the verification. We can exploit characteristics of the formulas describing the verification conditions to greatly simplify the propositional formulas generated. In particular, we exploit the property that many equations appear only in positive form. We can therefore reduce the set of interpretations of the function symbols that must be considered to prove that a formula is universally valid to those that are ``maximally diverse.'' We present experimental results demonstrating the efficiency of this approach when verifying pipelined processors using the method proposed by Burch and Dill.<|reference_end|>
arxiv
@article{bryant1999processor, title={Processor Verification Using Efficient Reductions of the Logic of Uninterpreted Functions to Propositional Logic}, author={Randal E. Bryant, Steven German, Miroslav N. Velev}, journal={arXiv preprint arXiv:cs/9910014}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910014}, primaryClass={cs.LO cs.AR} }
bryant1999processor
arxiv-676459
cs/9910015
PIPE: Personalizing Recommendations via Partial Evaluation
<|reference_start|>PIPE: Personalizing Recommendations via Partial Evaluation: It is shown that personalization of web content can be advantageously viewed as a form of partial evaluation --- a technique well known in the programming languages community. The basic idea is to model a recommendation space as a program, then partially evaluate this program with respect to user preferences (and features) to obtain specialized content. This technique supports both content-based and collaborative approaches, and is applicable to a range of applications that require automatic information integration from multiple web sources. The effectiveness of this methodology is illustrated by two example applications --- (i) personalizing content for visitors to the Blacksburg Electronic Village (http://www.bev.net), and (ii) locating and selecting scientific software on the Internet. The scalability of this technique is demonstrated by its ability to interface with online web ontologies that index thousands of web pages.<|reference_end|>
arxiv
@article{ramakrishnan1999pipe:, title={PIPE: Personalizing Recommendations via Partial Evaluation}, author={Naren Ramakrishnan}, journal={arXiv preprint arXiv:cs/9910015}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910015}, primaryClass={cs.IR cs.AI} }
ramakrishnan1999pipe:
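To make the "recommendation space as a program" idea above concrete, here is a toy partial evaluator over a dict-based branching program; the representation and all names are illustrative assumptions, not the PIPE system's API:

```python
def partially_evaluate(space, known):
    # Model a recommendation space as a program (a nested dict that
    # branches on attribute values) and specialize it with respect to
    # the user attributes in `known`; attributes the user has not yet
    # supplied remain as residual branches, i.e. residual program text.
    if not isinstance(space, dict) or "attr" not in space:
        return space                          # leaf: concrete content
    if space["attr"] in known:                # static input: take branch
        chosen = space["branches"][known[space["attr"]]]
        return partially_evaluate(chosen, known)
    return {                                  # dynamic input: residualize
        "attr": space["attr"],
        "branches": {v: partially_evaluate(b, known)
                     for v, b in space["branches"].items()},
    }

space = {"attr": "interest",
         "branches": {"music": {"attr": "town",
                                "branches": {"blacksburg": "events.html",
                                             "other": "national.html"}},
                      "sports": "scores.html"}}
print(partially_evaluate(space, {"interest": "music"}))
# -> the residual program, branching only on the still-unknown "town"
```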
arxiv-676460
cs/9910016
Probabilistic Agent Programs
<|reference_start|>Probabilistic Agent Programs: Agents are small programs that autonomously take actions based on changes in their environment or ``state.'' Over the last few years, there has been an increasing number of efforts to build agents that can interact and/or collaborate with other agents. In one of these efforts, Eiter, Subrahmanian, and Pick (AIJ, 108(1-2), pages 179-255) have shown how agents may be built on top of legacy code. However, their framework assumes that agent states are completely determined, and there is no uncertainty in an agent's state. Thus, their framework allows an agent developer to specify how his agents will react when the agent is 100% sure about what is true/false in the world state. In this paper, we propose the concept of a \emph{probabilistic agent program} and show how, given an arbitrary program written in any imperative language, we may build a declarative ``probabilistic'' agent program on top of it which supports decision making in the presence of uncertainty. We provide two alternative semantics for probabilistic agent programs. We show that the second semantics, though more epistemically appealing, is more complex to compute. We provide sound and complete algorithms to compute the semantics of \emph{positive} agent programs.<|reference_end|>
arxiv
@article{dix1999probabilistic, title={Probabilistic Agent Programs}, author={Juergen Dix, Mirco Nanni, VS Subrahmanian}, journal={arXiv preprint arXiv:cs/9910016}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910016}, primaryClass={cs.AI} }
dix1999probabilistic
arxiv-676461
cs/9910017
Finite-resolution hidden surface removal
<|reference_start|>Finite-resolution hidden surface removal: We propose a hybrid image-space/object-space solution to the classical hidden surface removal problem: Given n disjoint triangles in $\mathbb{R}^3$ and p sample points (``pixels'') in the xy-plane, determine the first triangle directly behind each pixel. Our algorithm constructs the sampled visibility map of the triangles with respect to the pixels, which is the subset of the trapezoids in a trapezoidal decomposition of the analytic visibility map that contain at least one pixel. The sampled visibility map adapts to local changes in image complexity, and its complexity is bounded both by the number of pixels and by the complexity of the analytic visibility map. Our algorithm runs in time $O(n^{1+\epsilon} + n^{2/3+\epsilon}t^{2/3} + p)$, where t is the output size and $\epsilon$ is any positive constant. This is nearly optimal in the worst case and compares favorably with the best output-sensitive algorithms for both ray casting and analytic hidden surface removal. In the special case where the pixels form a regular grid, a sweepline variant of our algorithm runs in time $O(n^{1+\epsilon} + n^{2/3+\epsilon}t^{2/3} + t \log p)$, which is usually sublinear in the number of pixels.<|reference_end|>
arxiv
@article{erickson1999finite-resolution, title={Finite-resolution hidden surface removal}, author={Jeff Erickson}, journal={arXiv preprint arXiv:cs/9910017}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910017}, primaryClass={cs.CG cs.GR} }
erickson1999finite-resolution
arxiv-676462
cs/9910018
Decoupling Control from Data for TCP Congestion Control
<|reference_start|>Decoupling Control from Data for TCP Congestion Control: Many applications want to use TCP congestion control to regulate the transmission rate of a data packet stream. A natural way to achieve this goal is to transport the data packet stream on a TCP connection. However, because TCP implements both congestion and error control, transporting a data packet stream directly using a TCP connection forces the data packet stream to be subject to TCP's other properties caused by TCP error control, which may be inappropriate for these applications. The TCP decoupling approach proposed in this thesis is a novel way of applying TCP congestion control to a data packet stream without actually transporting the data packet stream on a TCP connection. Instead, a TCP connection using the same network path as the data packet stream is set up separately and the transmission rate of the data packet stream is then associated with that of the TCP packets. Since the transmission rate of these TCP packets is under TCP congestion control, so is that of the data packet stream. Furthermore, since the data packet stream is not transported on a TCP connection, the regulated data packet stream is not subject to TCP error control. Because of this flexibility, the TCP decoupling approach opens up many new opportunities, solves old problems, and improves the performance of some existing applications. All of these advantages will be demonstrated in the thesis. This thesis presents the design, implementation, and analysis of the TCP decoupling approach, and its successful applications in TCP trunking, wireless communication, and multimedia streaming.<|reference_end|>
arxiv
@article{wang1999decoupling, title={Decoupling Control from Data for TCP Congestion Control}, author={S.Y. Wang}, journal={arXiv preprint arXiv:cs/9910018}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910018}, primaryClass={cs.NI} }
wang1999decoupling
arxiv-676463
cs/9910019
Consistent Checkpointing in Distributed Databases: Towards a Formal Approach
<|reference_start|>Consistent Checkpointing in Distributed Databases: Towards a Formal Approach: Whether it is for audit or for recovery purposes, data checkpointing is an important problem of distributed database systems. Actually, transactions establish dependence relations on data checkpoints taken by data object managers. So, given an arbitrary set of data checkpoints (including at least a single data checkpoint from a data manager, and at most a data checkpoint from each data manager), an important question is the following one: ``Can these data checkpoints be members of a same consistent global checkpoint?''. This paper answers this question by providing a necessary and sufficient condition suited for database systems. Moreover, to show the usefulness of this condition, two {\em non-intrusive} data checkpointing protocols are derived from this condition. It is also interesting to note that this paper, by exhibiting ``correspondences'', establishes a bridge between the data object/transaction model and the process/message-passing model.<|reference_end|>
arxiv
@article{baldoni1999consistent, title={Consistent Checkpointing in Distributed Databases: Towards a Formal Approach}, author={R.Baldoni, F. Quaglia, and M.Raynal}, journal={arXiv preprint arXiv:cs/9910019}, year={1999}, number={Rapporto di Ricerca, Dipartimento di Informatica e Sistemistica, Universita' di Roma "La Sapienza"-(Italy) n. 27-97, July 1997}, archivePrefix={arXiv}, eprint={cs/9910019}, primaryClass={cs.DB cs.DC} }
baldoni1999consistent
arxiv-676464
cs/9910020
Selective Sampling for Example-based Word Sense Disambiguation
<|reference_start|>Selective Sampling for Example-based Word Sense Disambiguation: This paper proposes an efficient example sampling method for example-based word sense disambiguation systems. Constructing a database of practical size requires a considerable overhead for manual sense disambiguation (overhead for supervision). In addition, the time complexity of searching a large-sized database poses a considerable problem (overhead for search). To counter these problems, our method selectively samples a smaller-sized effective subset from a given example set for use in word sense disambiguation. Our method is characterized by the reliance on the notion of training utility: the degree to which each example is informative for future example sampling when used for the training of the system. The system progressively collects examples by selecting those with greatest utility. The paper reports the effectiveness of our method through experiments on about one thousand sentences. Compared to experiments with other example sampling methods, our method reduced both the overhead for supervision and the overhead for search, without degrading the performance of the system.<|reference_end|>
arxiv
@article{fujii1999selective, title={Selective Sampling for Example-based Word Sense Disambiguation}, author={Atsushi Fujii, Kentaro Inui, Takenobu Tokunaga, and Hozumi Tanaka}, journal={Computational Linguistics, Vol.24, No.4, pp.573-597, 1998}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910020}, primaryClass={cs.CL} }
fujii1999selective
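The selection loop sketched below illustrates the training-utility idea in the abstract above; the `utility` callback is a hypothetical stand-in for the paper's own informativeness measure:

```python
def select_examples(pool, utility, budget):
    # Greedy sampling in the spirit of "training utility": repeatedly
    # add the example estimated to be most informative given what has
    # already been selected. `utility(example, selected)` is assumed to
    # return a score; the paper defines its own measure for WSD.
    selected, remaining = [], list(pool)
    while remaining and len(selected) < budget:
        best = max(remaining, key=lambda ex: utility(ex, selected))
        selected.append(best)
        remaining.remove(best)
    return selected
```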
arxiv-676465
cs/9910021
Efficient and Extensible Algorithms for Multi Query Optimization
<|reference_start|>Efficient and Extensible Algorithms for Multi Query Optimization: Complex queries are becoming commonplace, with the growing use of decision support systems. These complex queries often have a lot of common sub-expressions, either within a single query, or across multiple such queries run as a batch. Multi-query optimization aims at exploiting common sub-expressions to reduce evaluation cost. Multi-query optimization has hitherto been viewed as impractical, since earlier algorithms were exhaustive and explored a doubly exponential search space. In this paper we demonstrate that multi-query optimization using heuristics is practical, and provides significant benefits. We propose three cost-based heuristic algorithms: Volcano-SH and Volcano-RU, which are based on simple modifications to the Volcano search strategy, and a greedy heuristic. Our greedy heuristic incorporates novel optimizations that improve efficiency greatly. Our algorithms are designed to be easily added to existing optimizers. We present a performance study comparing the algorithms, using workloads consisting of queries from the TPC-D benchmark. The study shows that our algorithms provide significant benefits over traditional optimization, at a very acceptable overhead in optimization time.<|reference_end|>
arxiv
@article{roy1999efficient, title={Efficient and Extensible Algorithms for Multi Query Optimization}, author={Prasan Roy, S. Seshadri, S. Sudarshan, Siddhesh Bhobe}, journal={arXiv preprint arXiv:cs/9910021}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910021}, primaryClass={cs.DB} }
roy1999efficient
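As a rough sketch of the greedy heuristic described above: starting with no shared subexpressions, repeatedly materialize the candidate that most reduces the estimated batch cost. The `plan_cost` callback and passing `candidates` as a set are assumptions for illustration, not the paper's interface:

```python
def greedy_mqo(candidates, plan_cost):
    # Greedy common-subexpression selection: each sweep tries adding
    # every remaining candidate to the shared (materialized) set and
    # keeps the single one that lowers the total estimated cost of the
    # query batch the most; stop when no candidate helps.
    shared = set()
    best_cost = plan_cost(shared)
    while True:
        best_pick = None
        for c in candidates - shared:
            cost = plan_cost(shared | {c})
            if cost < best_cost:
                best_cost, best_pick = cost, c
        if best_pick is None:
            return shared, best_cost
        shared.add(best_pick)
```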
arxiv-676466
cs/9910022
Practical experiments with regular approximation of context-free languages
<|reference_start|>Practical experiments with regular approximation of context-free languages: Several methods are discussed that construct a finite automaton given a context-free grammar, including both methods that lead to subsets and those that lead to supersets of the original context-free language. Some of these methods of regular approximation are new, and some others are presented here in a more refined form with respect to existing literature. Practical experiments with the different methods of regular approximation are performed for spoken-language input: hypotheses from a speech recognizer are filtered through a finite automaton.<|reference_end|>
arxiv
@article{nederhof1999practical, title={Practical experiments with regular approximation of context-free languages}, author={Mark-Jan Nederhof}, journal={arXiv preprint arXiv:cs/9910022}, year={1999}, archivePrefix={arXiv}, eprint={cs/9910022}, primaryClass={cs.CL} }
nederhof1999practical
arxiv-676467
cs/9910023
A System of Interaction and Structure
<|reference_start|>A System of Interaction and Structure: This paper introduces a logical system, called BV, which extends multiplicative linear logic by a non-commutative self-dual logical operator. This extension is particularly challenging for the sequent calculus, and so far it has not been achieved therein. It becomes very natural in a new formalism, called the calculus of structures, which is the main contribution of this work. Structures are formulae submitted to certain equational laws typical of sequents. The calculus of structures is obtained by generalising the sequent calculus in such a way that a new top-down symmetry of derivations is observed, and it employs inference rules that rewrite inside structures at any depth. These properties, in addition to allowing the design of BV, yield a modular proof of cut elimination.<|reference_end|>
arxiv
@article{guglielmi1999a, title={A System of Interaction and Structure}, author={Alessio Guglielmi}, journal={ACM Transactions on Computational Logic, Vol. 8 (1:1), 2007, pp. 1-64}, year={1999}, doi={10.1145/1182613.1182614}, archivePrefix={arXiv}, eprint={cs/9910023}, primaryClass={cs.LO} }
guglielmi1999a
arxiv-676468
cs/9910024
On Reconfiguring Tree Linkages: Trees can Lock
<|reference_start|>On Reconfiguring Tree Linkages: Trees can Lock: It has recently been shown that any simple (i.e. nonintersecting) polygonal chain in the plane can be reconfigured to lie on a straight line, and any simple polygon can be reconfigured to be convex. This result cannot be extended to tree linkages: we show that there are trees with two simple configurations that are not connected by a motion that preserves simplicity throughout the motion. Indeed, we prove that an $N$-link tree can have $2^{\Omega(N)}$ equivalence classes of configurations.<|reference_end|>
arxiv
@article{biedl1999on, title={On Reconfiguring Tree Linkages: Trees can Lock}, author={Therese Biedl, Erik Demaine, Martin Demaine, Sylvain Lazard, Anna Lubiw, Joseph O'Rourke, Steve Robbins, Ileana Streinu, Godfried Toussaint, Sue Whitesides}, journal={arXiv preprint arXiv:cs/9910024}, year={1999}, number={SOCS-00.7}, archivePrefix={arXiv}, eprint={cs/9910024}, primaryClass={cs.CG cs.DM} }
biedl1999on
arxiv-676469
cs/9911001
Semantics of Programming Languages: A Tool-Oriented Approach
<|reference_start|>Semantics of Programming Languages: A Tool-Oriented Approach: By paying more attention to semantics-based tool generation, programming language semantics can significantly increase its impact. Ultimately, this may lead to ``Language Design Assistants'' incorporating substantial amounts of semantic knowledge.<|reference_end|>
arxiv
@article{heering1999semantics, title={Semantics of Programming Languages: A Tool-Oriented Approach}, author={Jan Heering, Paul Klint}, journal={ACM SIGPLAN Notices V. 35(3) March 2000 pp. 39-48}, year={1999}, number={SEN-R9920 (CWI, Amsterdam)}, archivePrefix={arXiv}, eprint={cs/9911001}, primaryClass={cs.PL} }
heering1999semantics
arxiv-676470
cs/9911002
Numeration systems on a regular language: Arithmetic operations, Recognizability and Formal power series
<|reference_start|>Numeration systems on a regular language: Arithmetic operations, Recognizability and Formal power series: Generalizations of numeration systems in which N is recognizable by a finite automaton are obtained by describing a lexicographically ordered infinite regular language L over a finite alphabet A. For these systems, we obtain a characterization of recognizable sets of integers in terms of rational formal series. We also show that, if the complexity of L is $\Theta(n^q)$ (resp. if L is the complement of a polynomial language), then multiplication by an integer k preserves recognizability only if $k=t^{q+1}$ (resp. if k is not a power of the cardinality of A) for some integer t. Finally, we obtain sufficient conditions for the notions of recognizability and U-recognizability to be equivalent, where U is some positional numeration system related to a sequence of integers.<|reference_end|>
arxiv
@article{rigo1999numeration, title={Numeration systems on a regular language: Arithmetic operations, Recognizability and Formal power series}, author={Michel Rigo}, journal={Theoret. Comput. Sci. 269 (2001) 469--498}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911002}, primaryClass={cs.CC} }
rigo1999numeration
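A standard illustration of such a numeration system (my example, not taken from the paper): let $A=\{0,1\}$ and $L=\{\varepsilon\}\cup 1\{0,1\}^*$, ordered first by length and then lexicographically, so $L$ enumerates as $\varepsilon, 1, 10, 11, 100, 101, \ldots$. Letting the word of index $n$ (counting from $0$) represent the integer $n$ recovers ordinary binary notation: $11$ has index $3$ and indeed $11_2 = 3$, while $101$ has index $5$ and $101_2 = 5$. A set of integers is then recognizable when the language of its representations is accepted by a finite automaton.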
arxiv-676471
cs/9911003
Subgraph Isomorphism in Planar Graphs and Related Problems
<|reference_start|>Subgraph Isomorphism in Planar Graphs and Related Problems: We solve the subgraph isomorphism problem in planar graphs in linear time, for any pattern of constant size. Our results are based on a technique of partitioning the planar graph into pieces of small tree-width, and applying dynamic programming within each piece. The same methods can be used to solve other planar graph problems including connectivity, diameter, girth, induced subgraph isomorphism, and shortest paths.<|reference_end|>
arxiv
@article{eppstein1999subgraph, title={Subgraph Isomorphism in Planar Graphs and Related Problems}, author={David Eppstein}, journal={J. Graph Algorithms & Applications 3(3):1-27, 1999}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911003}, primaryClass={cs.DS} }
eppstein1999subgraph
arxiv-676472
cs/9911004
Graph Ramsey games
<|reference_start|>Graph Ramsey games: We consider combinatorial avoidance and achievement games based on graph Ramsey theory: The players take turns in coloring still uncolored edges of a graph G, each player being assigned a distinct color, choosing one edge per move. In avoidance games, completing a monochromatic subgraph isomorphic to another graph A leads to immediate defeat or is forbidden and the first player that cannot move loses. In the avoidance+ variants, both players are free to choose more than one edge per move. In achievement games, the first player that completes a monochromatic subgraph isomorphic to A wins. Erdos & Selfridge (1973) were the first to identify some tractable subcases of these games, followed by a large number of further studies. We complete these investigations by settling the complexity of all unrestricted cases: We prove that general graph Ramsey avoidance, avoidance+, and achievement games and several variants thereof are PSPACE-complete. We ultra-strongly solve some nontrivial instances of graph Ramsey avoidance games that are based on symmetric binary Ramsey numbers and provide strong evidence that all other cases based on symmetric binary Ramsey numbers are effectively intractable. Keywords: combinatorial games, graph Ramsey theory, Ramsey game, PSPACE-completeness, complexity, edge coloring, winning strategy, achievement game, avoidance game, the game of Sim, Polya's enumeration formula, probabilistic counting, machine learning, heuristics, Java applet<|reference_end|>
arxiv
@article{slany1999graph, title={Graph Ramsey games}, author={Wolfgang Slany}, journal={arXiv preprint arXiv:cs/9911004}, year={1999}, number={DBAI-TR-99-34}, archivePrefix={arXiv}, eprint={cs/9911004}, primaryClass={cs.CC cs.DM math.CO} }
slany1999graph
arxiv-676473
cs/9911005
What Next? A Dozen Information-Technology Research Goals
<|reference_start|>What Next? A Dozen Information-Technology Research Goals: Charles Babbage's vision of computing has largely been realized. We are on the verge of realizing Vannevar Bush's Memex. But, we are some distance from passing the Turing Test. These three visions and their associated problems have provided long-range research goals for many of us. For example, the scalability problem has motivated me for several decades. This talk defines a set of fundamental research problems that broaden the Babbage, Bush, and Turing visions. They extend Babbage's computational goal to include highly-secure, highly-available, self-programming, self-managing, and self-replicating systems. They extend Bush's Memex vision to include a system that automatically organizes, indexes, digests, evaluates, and summarizes information (as well as a human might). Another group of problems extends Turing's vision of intelligent machines to include prosthetic vision, speech, hearing, and other senses. Each problem is simply stated and each is orthogonal to the others, though they share some common core technologies.<|reference_end|>
arxiv
@article{gray1999what, title={What Next? A Dozen Information-Technology Research Goals}, author={Jim Gray}, journal={arXiv preprint arXiv:cs/9911005}, year={1999}, number={MS TR 99-50}, archivePrefix={arXiv}, eprint={cs/9911005}, primaryClass={cs.GL} }
gray1999what
arxiv-676474
cs/9911006
Question Answering System Using Syntactic Information
<|reference_start|>Question Answering System Using Syntactic Information: The question answering task is now being pursued in TREC8 using English documents. We examined the question answering task on Japanese sentences. Our method selects the answer by matching the question sentence with knowledge-based data written in natural language. We use syntactic information to obtain highly accurate answers.<|reference_end|>
arxiv
@article{murata1999question, title={Question Answering System Using Syntactic Information}, author={M. Murata, M. Utiyama, H. Isahara (CRL)}, journal={arXiv preprint arXiv:cs/9911006}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911006}, primaryClass={cs.CL} }
murata1999question
arxiv-676475
cs/9911007
One-Way Functions in Worst-Case Cryptography: Algebraic and Security Properties
<|reference_start|>One-Way Functions in Worst-Case Cryptography: Algebraic and Security Properties: We survey recent developments in the study of (worst-case) one-way functions having strong algebraic and security properties. According to [RS93], this line of research was initiated in 1984 by Rivest and Sherman who designed two-party secret-key agreement protocols that use strongly noninvertible, total, associative one-way functions as their key building blocks. If commutativity is added as an ingredient, these protocols can be used by more than two parties, as noted by Rabi and Sherman [RS93] who also developed digital signature protocols that are based on such enhanced one-way functions. Until recently, it was an open question whether one-way functions having the algebraic and security properties that these protocols require could be created from any given one-way function. Recently, Hemaspaandra and Rothe [HR99] resolved this open issue in the affirmative, by showing that one-way functions exist if and only if strong, total, commutative, associative one-way functions exist. We discuss this result, and the work of Rabi, Rivest, and Sherman, and recent work of Homan [Hom99] that makes progress on related issues.<|reference_end|>
arxiv
@article{beygelzimer1999one-way, title={One-Way Functions in Worst-Case Cryptography: Algebraic and Security Properties}, author={A. Beygelzimer, L. A. Hemaspaandra, C. M. Homan and J. Rothe}, journal={arXiv preprint arXiv:cs/9911007}, year={1999}, number={University of Rochester Technical Report UR-CS TR 722}, archivePrefix={arXiv}, eprint={cs/9911007}, primaryClass={cs.CC cs.CR} }
beygelzimer1999one-way
arxiv-676476
cs/9911008
On quantum and classical space-bounded processes with algebraic transition amplitudes
<|reference_start|>On quantum and classical space-bounded processes with algebraic transition amplitudes: We define a class of stochastic processes based on evolutions and measurements of quantum systems, and consider the complexity of predicting their long-term behavior. It is shown that a very general class of decision problems regarding these stochastic processes can be efficiently solved classically in the space-bounded case. The following corollaries are implied by our main result: (1) Any space O(s) uniform family of quantum circuits acting on s qubits and consisting of unitary gates and measurement gates defined in a typical way by matrices of algebraic numbers can be simulated by an unbounded error space O(s) ordinary (i.e., fair-coin flipping) probabilistic Turing machine, and hence by space O(s) uniform classical (deterministic) circuits of depth O(s^2) and size 2^(O(s)). The quantum circuits are not required to operate with bounded error and may have depth exponential in s. (2) Any (unbounded error) quantum Turing machine running in space s, having arbitrary algebraic transition amplitudes, allowing unrestricted measurements during its computation, and having no restrictions on running time can be simulated by an unbounded error space O(s) ordinary probabilistic Turing machine, and hence deterministically in space O(s^2).<|reference_end|>
arxiv
@article{watrous1999on, title={On quantum and classical space-bounded processes with algebraic transition amplitudes}, author={John Watrous (University of Calgary)}, journal={arXiv preprint arXiv:cs/9911008}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911008}, primaryClass={cs.CC quant-ph} }
watrous1999on
arxiv-676477
cs/9911009
Two-way finite automata with quantum and classical states
<|reference_start|>Two-way finite automata with quantum and classical states: We introduce 2-way finite automata with quantum and classical states (2qcfa's). This is a variant on the 2-way quantum finite automata (2qfa) model which may be simpler to implement than unrestricted 2qfa's; the internal state of a 2qcfa may include a quantum part that may be in a (mixed) quantum state, but the tape head position is required to be classical. We show two languages for which 2qcfa's are better than classical 2-way automata. First, 2qcfa's can recognize palindromes, a language that cannot be recognized by 2-way deterministic or probabilistic finite automata. Second, in polynomial time 2qcfa's can recognize {a^n b^n | n>=0}, a language that can be recognized classically by a 2-way probabilistic automaton but only in exponential time.<|reference_end|>
arxiv
@article{ambainis1999two-way, title={Two-way finite automata with quantum and classical states}, author={Andris Ambainis (1), John Watrous (2) ((1) UC Berkeley, (2) University of Calgary)}, journal={arXiv preprint arXiv:cs/9911009}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911009}, primaryClass={cs.CC quant-ph} }
ambainis1999two-way
arxiv-676478
cs/9911010
The Sources of Certainty in Computation and Formal Systems
<|reference_start|>The Sources of Certainty in Computation and Formal Systems: In his Discourse on the Method of Rightly Conducting the Reason, and Seeking Truth in the Sciences, Rene Descartes sought ``clear and certain knowledge of all that is useful in life.'' Almost three centuries later, in ``The foundations of mathematics,'' David Hilbert tried to ``recast mathematical definitions and inferences in such a way that they are unshakable.'' Hilbert's program relied explicitly on formal systems (equivalently, computational systems) to provide certainty in mathematics. The concepts of computation and formal system were not defined in his time, but Descartes' method may be understood as seeking certainty in essentially the same way. In this article, I explain formal systems as concrete artifacts, and investigate the way in which they provide a high level of certainty---arguably the highest level achievable by rational discourse. The rich understanding of formal systems achieved by mathematical logic and computer science in this century illuminates the nature of programs, such as Descartes' and Hilbert's, that seek certainty through rigorous analysis.<|reference_end|>
arxiv
@article{o'donnell1999the, title={The Sources of Certainty in Computation and Formal Systems}, author={Michael J. O'Donnell}, journal={arXiv preprint arXiv:cs/9911010}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911010}, primaryClass={cs.OH} }
o'donnell1999the
arxiv-676479
cs/9911011
One-Level Prosodic Morphology
<|reference_start|>One-Level Prosodic Morphology: Recent developments in theoretical linguistics have led to a widespread acceptance of constraint-based analyses of prosodic morphology phenomena such as truncation, infixation, floating morphemes and reduplication. Of these, reduplication is particularly challenging for state-of-the-art computational morphology, since it involves copying of some part of a phonological string. In this paper I argue for certain extensions to the one-level model of phonology and morphology (Bird & Ellison 1994) to cover the computational aspects of prosodic morphology using finite-state methods. In a nutshell, enriched lexical representations provide additional automaton arcs to repeat or skip sounds and also to allow insertion of additional material. A kind of resource consciousness is introduced to control this additional freedom, distinguishing between producer and consumer arcs. The non-finite-state copying aspect of reduplication is mapped to automata intersection, itself a non-finite-state operation. Bounded local optimization prunes certain automaton arcs that fail to contribute to linguistic optimisation criteria. The paper then presents implemented case studies of Ulwa construct state infixation, German hypocoristic truncation and Tagalog over-applying reduplication that illustrate the expressive power of this approach, before its merits and limitations are discussed and possible extensions are sketched. I conclude that the one-level approach to prosodic morphology presents an attractive way of extending finite-state techniques to difficult phenomena that hitherto resisted elegant computational analyses.<|reference_end|>
arxiv
@article{walther1999one-level, title={One-Level Prosodic Morphology}, author={Markus Walther (University of Marburg)}, journal={arXiv preprint arXiv:cs/9911011}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911011}, primaryClass={cs.CL} }
walther1999one-level
arxiv-676480
cs/9911012
Cox's Theorem Revisited
<|reference_start|>Cox's Theorem Revisited: The assumptions needed to prove Cox's Theorem are discussed and examined. Various sets of assumptions under which a Cox-style theorem can be proved are provided, although all are rather strong and, arguably, not natural.<|reference_end|>
arxiv
@article{halpern1999cox's, title={Cox's Theorem Revisited}, author={Joseph Y. Halpern}, journal={Journal of AI Research, vol. 11, 1999, pp. 429-435}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911012}, primaryClass={cs.AI} }
halpern1999cox's
arxiv-676481
cs/9911013
PushPush is NP-hard in 3D
<|reference_start|>PushPush is NP-hard in 3D: We prove that a particular pushing-blocks puzzle is intractable in 3D. The puzzle, inspired by the game PushPush, consists of unit square blocks on an integer lattice. An agent may push blocks (but never pull them) in attempting to move between given start and goal positions. In the PushPush version, the agent can only push one block at a time, and moreover, each block, when pushed, slides the maximal extent of its free range. We prove this version is NP-hard in 3D by reduction from SAT. The corresponding problem in 2D remains open.<|reference_end|>
arxiv
@article{o'rourke1999pushpush, title={PushPush is NP-hard in 3D}, author={Joseph O'Rourke and The Smith Problem Solving Group}, journal={arXiv preprint arXiv:cs/9911013}, year={1999}, number={Smith Tech. Rep. 064, Nov. 1999}, archivePrefix={arXiv}, eprint={cs/9911013}, primaryClass={cs.CG cs.DM} }
o'rourke1999pushpush
arxiv-676482
cs/9911014
The Complexity of Poor Man's Logic
<|reference_start|>The Complexity of Poor Man's Logic: Motivated by description logics, we investigate what happens to the complexity of modal satisfiability problems if we only allow formulas built from literals, $\wedge$, $\Diamond$, and $\Box$. Previously, the only known result was that the complexity of the satisfiability problem for K dropped from PSPACE-complete to coNP-complete (Schmidt-Schauss and Smolka, 1991 and Donini et al., 1992). In this paper we show that not all modal logics behave like K. In particular, we show that the complexity of the satisfiability problem with respect to frames in which each world has at least one successor drops from PSPACE-complete to P, but that in contrast the satisfiability problem with respect to the class of frames in which each world has at most two successors remains PSPACE-complete. As a corollary of the latter result, we also solve the open problem from Donini et al.'s complexity classification of description logics (Donini et al., 1997). In the last section, we classify the complexity of the satisfiability problem for K for all other restrictions on the set of operators.<|reference_end|>
arxiv
@article{hemaspaandra1999the, title={The Complexity of Poor Man's Logic}, author={Edith Hemaspaandra}, journal={Journal of Logic and Computation, 11(4), 609--622, 2001. Extended abstract in STACS 2000}, year={1999}, archivePrefix={arXiv}, eprint={cs/9911014}, primaryClass={cs.LO cs.CC} }
hemaspaandra1999the
arxiv-676483
cs/9912001
The phase transition in random Horn satisfiability and its algorithmic implications
<|reference_start|>The phase transition in random Horn satisfiability and its algorithmic implications: Let $c>0$ be a constant, and $\Phi$ be a random Horn formula with $n$ variables and $m=c\cdot 2^{n}$ clauses, chosen uniformly at random (with repetition) from the set of all nonempty Horn clauses in the given variables. By analyzing PUR, a natural implementation of positive unit resolution, we show that $\lim_{n\to \infty} \Pr[\Phi \mbox{ is satisfiable}] = 1-F(e^{-c})$, where $F(x)=(1-x)(1-x^2)(1-x^4)(1-x^8)\cdots$. Our method also yields as a byproduct an average-case analysis of this algorithm.<|reference_end|>
arxiv
@article{istrate1999the, title={The phase transition in random Horn satisfiability and its algorithmic implications}, author={Gabriel Istrate}, journal={arXiv preprint arXiv:cs/9912001}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912001}, primaryClass={cs.DS cs.CC} }
istrate1999the
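A quick numeric sketch of the limit law stated in the abstract above, truncating the infinite product; the code is illustrative, but the formula is exactly the one in the abstract:

```python
import math

def F(x, terms=64):
    # Partial product of F(x) = (1-x)(1-x^2)(1-x^4)(1-x^8)...;
    # the exponents double, so this converges fast for 0 <= x < 1.
    p = 1.0
    for i in range(terms):
        p *= 1.0 - x ** (2 ** i)
    return p

def horn_sat_limit(c):
    # lim_{n -> infinity} Pr[Phi is satisfiable] = 1 - F(e^{-c}).
    return 1.0 - F(math.exp(-c))

for c in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"c = {c}: Pr[satisfiable] -> {horn_sat_limit(c):.4f}")
```

As expected, sparse formulas (small $c$) come out almost surely satisfiable, and dense ones (large $c$) almost surely unsatisfiable.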
arxiv-676484
cs/9912002
A Geometric Model for Information Retrieval Systems
<|reference_start|>A Geometric Model for Information Retrieval Systems: This decade has seen a great deal of progress in the development of information retrieval systems. Unfortunately, we still lack a systematic understanding of the behavior of the systems and their relationship with documents. In this paper we present a completely new approach towards the understanding of information retrieval systems. Recently, it has been observed that retrieval systems in TREC 6 show some remarkable patterns in retrieving relevant documents. Based on the TREC 6 observations, we introduce a geometric linear model of information retrieval systems. We then apply the model to predict the number of relevant documents retrieved by the systems. The model is also scalable to a much larger data set. Although the model is developed based on the TREC 6 routing test data, I believe it is readily applicable to other information retrieval systems. In the Appendix, we explain a simple and efficient way of building a better system from existing systems.<|reference_end|>
arxiv
@article{kim1999a, title={A Geometric Model for Information Retrieval Systems}, author={Myung Ho Kim}, journal={arXiv preprint arXiv:cs/9912002}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912002}, primaryClass={cs.IR cs.CC cs.DL} }
kim1999a
arxiv-676485
cs/9912003
Resolution of Indirect Anaphora in Japanese Sentences Using Examples 'X no Y (Y of X)'
<|reference_start|>Resolution of Indirect Anaphora in Japanese Sentences Using Examples 'X no Y (Y of X)': A noun phrase can indirectly refer to an entity that has already been mentioned. For example, ``I went into an old house last night. The roof was leaking badly and ...'' indicates that ``the roof'' is associated with ``an old house'', which was mentioned in the previous sentence. This kind of reference (indirect anaphora) has not been studied well in natural language processing, but is important for coherence resolution, language understanding, and machine translation. In order to analyze indirect anaphora, we need a case frame dictionary for nouns that contains knowledge of the relationships between two nouns, but no such dictionary presently exists. Therefore, we are forced to use examples of ``X no Y'' (Y of X) and a verb case frame dictionary instead. We tried estimating indirect anaphora using this information and obtained a recall rate of 63% and a precision rate of 68% on test sentences. This indicates that the information of ``X no Y'' is useful to a certain extent when we cannot make use of a noun case frame dictionary. We estimated the results that would be given by a noun case frame dictionary, and obtained recall and precision rates of 71% and 82%, respectively. Finally, we proposed a way to construct a noun case frame dictionary by using examples of ``X no Y.''<|reference_end|>
arxiv
@article{murata1999resolution, title={Resolution of Indirect Anaphora in Japanese Sentences Using Examples 'X no Y (Y of X)'}, author={M. Murata, H. Isahara (CRL), M. Nagao (Kyoto University)}, journal={ACL'99 Workshop on 'Coreference and Its Applications', Maryland, USA, June 22, 1999}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912003}, primaryClass={cs.CL} }
murata1999resolution
arxiv-676486
cs/9912004
Pronoun Resolution in Japanese Sentences Using Surface Expressions and Examples
<|reference_start|>Pronoun Resolution in Japanese Sentences Using Surface Expressions and Examples: In this paper, we present a method of estimating referents of demonstrative pronouns, personal pronouns, and zero pronouns in Japanese sentences using examples, surface expressions, topics and foci. Unlike conventional work, which used semantic markers for semantic constraints, we used examples for semantic constraints and showed in our experiments that examples are as useful as semantic markers. We also propose many new methods for estimating referents of pronouns. For example, we use the form ``X of Y'' for estimating referents of demonstrative adjectives. In addition to our new methods, we used many conventional methods. As a result, experiments using these methods obtained a precision rate of 87% in estimating referents of demonstrative pronouns, personal pronouns, and zero pronouns for training sentences, and obtained a precision rate of 78% for test sentences.<|reference_end|>
arxiv
@article{murata1999pronoun, title={Pronoun Resolution in Japanese Sentences Using Surface Expressions and Examples}, author={M. Murata, H. Isahara (CRL), M. Nagao (Kyoto University)}, journal={ACL'99 Workshop on 'Coreference and Its Applications', Maryland, USA, June 22, 1999}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912004}, primaryClass={cs.CL} }
murata1999pronoun
arxiv-676487
cs/9912005
An Estimate of Referent of Noun Phrases in Japanese Sentences
<|reference_start|>An Estimate of Referent of Noun Phrases in Japanese Sentences: In machine translation and man-machine dialogue, it is important to clarify referents of noun phrases. We present a method for determining the referents of noun phrases in Japanese sentences by using the referential properties, modifiers, and possessors of noun phrases. Since the Japanese language has no articles, it is difficult to decide whether a noun phrase has an antecedent or not. We had previously estimated the referential properties of noun phrases that correspond to articles by using clue words in the sentences. By using these referential properties, our system determined the referents of noun phrases in Japanese sentences. Furthermore we used the modifiers and possessors of noun phrases in determining the referents of noun phrases. As a result, on training sentences we obtained a precision rate of 82% and a recall rate of 85% in the determination of the referents of noun phrases that have antecedents. On test sentences, we obtained a precision rate of 79% and a recall rate of 77%.<|reference_end|>
arxiv
@article{murata1999an, title={An Estimate of Referent of Noun Phrases in Japanese Sentences}, author={M. Murata (CRL), M. Nagao (Kyoto University)}, journal={Coling-ACL '98, Montreal, Canada, August 10, 1998 p912-916}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912005}, primaryClass={cs.CL} }
murata1999an
arxiv-676488
cs/9912006
Resolution of Verb Ellipsis in Japanese Sentence using Surface Expressions and Examples
<|reference_start|>Resolution of Verb Ellipsis in Japanese Sentence using Surface Expressions and Examples: Verbs are sometimes omitted in Japanese sentences. It is necessary to recover omitted verbs for purposes of language understanding, machine translation, and conversational processing. This paper describes a practical way to recover omitted verbs by using surface expressions and examples. We experimented with the resolution of verb ellipses by using this information, and obtained a recall rate of 73% and a precision rate of 66% on test sentences.<|reference_end|>
arxiv
@article{murata1999resolution, title={Resolution of Verb Ellipsis in Japanese Sentence using Surface Expressions and Examples}, author={M. Murata, M. Nagao (Kyoto University)}, journal={Natural Language Processing Pacific Rim Symposium 1997 (NLPRS'97), Cape Panwa Hotel, Phuket, Thailand, December 2-4, 1997 p75-80}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912006}, primaryClass={cs.CL} }
murata1999resolution
arxiv-676489
cs/9912007
An Example-Based Approach to Japanese-to-English Translation of Tense, Aspect, and Modality
<|reference_start|>An Example-Based Approach to Japanese-to-English Translation of Tense, Aspect, and Modality: We have developed a new example-based method for Japanese-to-English translation of tense, aspect, and modality. In this method, the similarity between input and example sentences is defined as the degree of semantic matching between the expressions at the ends of the sentences. Our method also uses the k-nearest neighbor method in order to exclude the effects of noise, for example, wrongly tagged data in the bilingual corpora. Experiments show that our method can translate tenses, aspects, and modalities more accurately than the top-level MT software currently available on the market. Moreover, it does not require hand-crafted rules.<|reference_end|>
arxiv
@article{murata1999an, title={An Example-Based Approach to Japanese-to-English Translation of Tense, Aspect, and Modality}, author={M. Murata, Q. Ma, K. Uchimoto, H. Isahara (CRL)}, journal={TMI'99, Chester, UK, August 23, 1999}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912007}, primaryClass={cs.CL} }
murata1999an
arxiv-676490
cs/9912008
New Error Bounds for Solomonoff Prediction
<|reference_start|>New Error Bounds for Solomonoff Prediction: Solomonoff sequence prediction is a scheme to predict digits of binary strings without knowing the underlying probability distribution. We call a prediction scheme informed when it knows the true probability distribution of the sequence. Several new relations between universal Solomonoff sequence prediction and informed prediction and general probabilistic prediction schemes will be proved. Among other things, they show that the number of errors in Solomonoff prediction is finite for computable distributions if it is finite in the informed case. Deterministic variants will also be studied. The most interesting result is that the deterministic variant of Solomonoff prediction is optimal compared to any other probabilistic or deterministic prediction scheme, apart from additive square root corrections only. This makes it well suited even for difficult prediction problems, where it does not suffice for the number of errors to be minimal merely to within some factor greater than one. Solomonoff's original bound and the ones presented here complement each other in a useful way.<|reference_end|>
arxiv
@article{hutter1999new, title={New Error Bounds for Solomonoff Prediction}, author={Marcus Hutter}, journal={J. Computer and System Science 62:4 (2001) 653-667}, year={1999}, number={IDSIA-11-00}, archivePrefix={arXiv}, eprint={cs/9912008}, primaryClass={cs.AI cs.LG} }
hutter1999new
arxiv-676491
cs/9912009
Deduction over Mixed-Level Logic Representations for Text Passage Retrieval
<|reference_start|>Deduction over Mixed-Level Logic Representations for Text Passage Retrieval: A system is described that uses a mixed-level representation of (part of) the meaning of natural language documents (based on standard Horn Clause Logic) and a variable-depth search strategy that distinguishes between the different levels of abstraction in the knowledge representation to locate specific passages in the documents. Mixed-level representations as well as variable-depth search strategies are applicable in fields outside that of NLP.<|reference_end|>
arxiv
@article{hess1999deduction, title={Deduction over Mixed-Level Logic Representations for Text Passage Retrieval}, author={Michael Hess}, journal={IEEE Computer Society Press, 1996. 383-390}, year={1999}, doi={10.1109/TAI.1996.560480}, archivePrefix={arXiv}, eprint={cs/9912009}, primaryClass={cs.CL} }
hess1999deduction
arxiv-676492
cs/9912010
Scalability Terminology: Farms, Clones, Partitions, Packs, RACS and RAPS
<|reference_start|>Scalability Terminology: Farms, Clones, Partitions, Packs, RACS and RAPS: Defines a vocabulary for scalable systems: geoplexes, farms, clones, partitions, packs, RACS, and RAPS, and discusses the design tradeoffs of using clones, partitions, and packs.<|reference_end|>
arxiv
@article{devlin1999scalability, title={Scalability Terminology: Farms, Clones, Partitions, Packs, RACS and RAPS}, author={Bill Devlin, Jim Gray, Bill Laing, George Spix}, journal={arXiv preprint arXiv:cs/9912010}, year={1999}, number={MS TR 99 85}, archivePrefix={arXiv}, eprint={cs/9912010}, primaryClass={cs.AR cs.DC} }
devlin1999scalability
arxiv-676493
cs/9912011
Adaptivity in Agent-Based Routing for Data Networks
<|reference_start|>Adaptivity in Agent-Based Routing for Data Networks: Adaptivity, both of the individual agents and of the interaction structure among the agents, seems indispensable for scaling up multi-agent systems (MAS's) in noisy environments. One important consideration in designing adaptive agents is choosing their action spaces to be as amenable as possible to machine learning techniques, especially to reinforcement learning (RL) techniques. One important way to have the interaction structure connecting agents itself be adaptive is to have the intentions and/or actions of the agents be in the input spaces of the other agents, much as in Stackelberg games. We consider both kinds of adaptivity in the design of a MAS to control network packet routing. We demonstrate on the OPNET event-driven network simulator the perhaps surprising fact that simply changing the action space of the agents to be better suited to RL can result in very large improvements in their potential performance: at their best settings, our learning-amenable router agents achieve throughputs up to three and one half times better than that of the standard Bellman-Ford routing algorithm, even when the Bellman-Ford protocol traffic is maintained. We then demonstrate that much of that potential improvement can be realized by having the agents learn their settings when the agent interaction structure is itself adaptive.<|reference_end|>
arxiv
@article{wolpert1999adaptivity, title={Adaptivity in Agent-Based Routing for Data Networks}, author={David H. Wolpert, Sergey Kirshner, Chris J. Merz and Kagan Tumer}, journal={arXiv preprint arXiv:cs/9912011}, year={1999}, number={NASA-ARC-IC-99-122}, archivePrefix={arXiv}, eprint={cs/9912011}, primaryClass={cs.MA adap-org cs.NI nlin.AO} }
wolpert1999adaptivity
arxiv-676494
cs/9912012
Avoiding Braess' Paradox through Collective Intelligence
<|reference_start|>Avoiding Braess' Paradox through Collective Intelligence: In an Ideal Shortest Path Algorithm (ISPA), at each moment each router in a network sends all of its traffic down the path that will incur the lowest cost to that traffic. In the limit of an infinitesimally small amount of traffic for a particular router, its routing that traffic via an ISPA is optimal, as far as cost incurred by that traffic is concerned. We demonstrate though that in many cases, due to the side-effects of one router's actions on another router's performance, having routers use ISPA's is suboptimal as far as global aggregate cost is concerned, even when only used to route infinitesimally small amounts of traffic. As a particular example of this we present an instance of Braess' paradox for ISPA's, in which adding new links to a network decreases overall throughput. We also demonstrate that load-balancing, in which the routing decisions are made to optimize the global cost incurred by all traffic currently being routed, is suboptimal as far as global cost averaged across time is concerned. This is also due to "side-effects", in this case of current routing decisions on future traffic. The theory of COllective INtelligence (COIN) is concerned precisely with the issue of avoiding such deleterious side-effects. We present key concepts from that theory and use them to derive an idealized algorithm whose performance is better than that of the ISPA, even in the infinitesimal limit. We present experiments verifying this, and also showing that a machine-learning-based version of this COIN algorithm in which costs are only imprecisely estimated (a version potentially applicable in the real world) also outperforms the ISPA, despite having access to less information than does the ISPA. In particular, this COIN algorithm avoids Braess' paradox.<|reference_end|>
arxiv
@article{tumer1999avoiding, title={Avoiding Braess' Paradox through Collective Intelligence}, author={Kagan Tumer and David H. Wolpert}, journal={arXiv preprint arXiv:cs/9912012}, year={1999}, number={NASA-ARC-IC-99-124}, archivePrefix={arXiv}, eprint={cs/9912012}, primaryClass={cs.DC adap-org cs.MA cs.NI nlin.AO} }
tumer1999avoiding
arxiv-676495
cs/9912013
Multivariate Regression Depth
<|reference_start|>Multivariate Regression Depth: The regression depth of a hyperplane with respect to a set of n points in R^d is the minimum number of points the hyperplane must pass through in a rotation to vertical. We generalize hyperplane regression depth to k-flats for any k between 0 and d-1. The k=0 case gives the classical notion of center points. We prove that for any k and d, deep k-flats exist, that is, for any set of n points there always exists a k-flat with depth at least a constant fraction of n. As a consequence, we derive a linear-time (1+epsilon)-approximation algorithm for the deepest flat.<|reference_end|>
arxiv
@article{bern1999multivariate, title={Multivariate Regression Depth}, author={Marshall Bern and David Eppstein}, journal={Discrete Comput. Geom. 28(1):1-17, July 2002}, year={1999}, doi={10.1007/s00454-001-0092-1}, archivePrefix={arXiv}, eprint={cs/9912013}, primaryClass={cs.CG math.CO} }
bern1999multivariate
arxiv-676496
cs/9912014
Fast Hierarchical Clustering and Other Applications of Dynamic Closest Pairs
<|reference_start|>Fast Hierarchical Clustering and Other Applications of Dynamic Closest Pairs: We develop data structures for dynamic closest pair problems with arbitrary distance functions that do not necessarily come from any geometric structure on the objects. Based on a technique previously used by the author for Euclidean closest pairs, we show how to insert and delete objects from an n-object set, maintaining the closest pair, in O(n log^2 n) time per update and O(n) space. With quadratic space, we can instead use a quadtree-like structure to achieve an optimal time bound, O(n) per update. We apply these data structures to hierarchical clustering, greedy matching, and TSP heuristics, and discuss other potential applications in machine learning, Groebner bases, and local improvement algorithms for partition and placement problems. Experiments show our new methods to be faster in practice than previously used heuristics.<|reference_end|>
arxiv
@article{eppstein1999fast, title={Fast Hierarchical Clustering and Other Applications of Dynamic Closest Pairs}, author={David Eppstein}, journal={J. Experimental Algorithmics 5(1):1-23, 2000}, year={1999}, doi={10.1145/351827.351829}, archivePrefix={arXiv}, eprint={cs/9912014}, primaryClass={cs.DS} }
eppstein1999fast
arxiv-676497
cs/9912015
Comparative Analysis of Five XML Query Languages
<|reference_start|>Comparative Analysis of Five XML Query Languages: XML is becoming the most relevant new standard for data representation and exchange on the WWW. Novel languages for extracting and restructuring XML content have been proposed, some in the tradition of database query languages (i.e. SQL, OQL), others more closely inspired by XML. No standard XML query language has yet been decided upon, but the discussion is ongoing within the World Wide Web Consortium and within many academic institutions and major Internet-related companies. We present a comparison of five representative query languages for XML, highlighting their common features and differences.<|reference_end|>
arxiv
@article{bonifati1999comparative, title={Comparative Analysis of Five XML Query Languages}, author={Angela Bonifati, Stefano Ceri}, journal={arXiv preprint arXiv:cs/9912015}, year={1999}, number={Dipartimento di Elettronica e Informazione, Politecnico di Milano (Italy) Technical Report nr.99-76}, archivePrefix={arXiv}, eprint={cs/9912015}, primaryClass={cs.DB} }
bonifati1999comparative
arxiv-676498
cs/9912016
HMM Specialization with Selective Lexicalization
<|reference_start|>HMM Specialization with Selective Lexicalization: We present a technique which complements Hidden Markov Models by incorporating some lexicalized states representing syntactically uncommon words. Our approach examines the distribution of transitions, selects the uncommon words, and makes lexicalized states for the words. We performed a part-of-speech tagging experiment on the Brown corpus to evaluate the resultant language model and discovered that this technique improved the tagging accuracy by 0.21% at the 95% level of confidence.<|reference_end|>
arxiv
@article{kim1999hmm, title={HMM Specialization with Selective Lexicalization}, author={Jin-Dong Kim and Sang-Zoo Lee and Hae-Chang Rim}, journal={Proceedings of the 1999 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pp.121-127, 1999}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912016}, primaryClass={cs.CL cs.LG} }
kim1999hmm
arxiv-676499
cs/9912017
Mixed-Level Knowledge Representation and Variable-Depth Inference in Natural Language Processing
<|reference_start|>Mixed-Level Knowledge Representation and Variable-Depth Inference in Natural Language Processing: A system is described that uses a mixed-level knowledge representation based on standard Horn Clause Logic to represent (part of) the meaning of natural language documents. A variable-depth search strategy is outlined that distinguishes between the different levels of abstraction in the knowledge representation to locate specific passages in the documents. A detailed description of the linguistic aspects of the system is given. Mixed-level representations as well as variable-depth search strategies are applicable in fields outside that of NLP.<|reference_end|>
arxiv
@article{hess1999mixed-level, title={Mixed-Level Knowledge Representation and Variable-Depth Inference in Natural Language Processing}, author={Michael Hess}, journal={International Journal on Artificial Intelligence Tools (IJAIT), vol 6, no 4, 1997. 481-509}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912017}, primaryClass={cs.CL} }
hess1999mixed-level
arxiv-676500
cs/9912018
Computation in an algebra of test selection criteria
<|reference_start|>Computation in an algebra of test selection criteria: One of the key concepts in testing is that of adequate test sets. A test selection criterion decides which test sets are adequate. In this paper, a language schema for specifying a large class of test selection criteria is developed; the schema is based on two operations for building complex criteria from simple ones. Basic algebraic properties of the two operations are derived. In the second part of the paper, a simple language, an instance of the general schema, is studied in detail, with the goal of generating small adequate test sets automatically. It is shown that one version of the problem is intractable, while another is solvable by an efficient algorithm. An implementation of the algorithm is described.<|reference_end|>
arxiv
@article{pachl1999computation, title={Computation in an algebra of test selection criteria}, author={Jan Pachl, Shmuel Zaks}, journal={arXiv preprint arXiv:cs/9912018}, year={1999}, archivePrefix={arXiv}, eprint={cs/9912018}, primaryClass={cs.SE} }
pachl1999computation