Dataset columns (type and observed string-length range):
corpus_id: string, length 7–12
paper_id: string, length 9–16
title: string, length 1–261
abstract: string, length 70–4.02k
source: string, 1 distinct value
bibtex: string, length 208–20.9k
citation_key: string, length 6–100
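Each record in the dump pairs the columns above with string values; the abstract field is wrapped in <|reference_start|> / <|reference_end|> markers. The following is a minimal, illustrative sketch (not part of the dataset) of how one such record might be represented and how the abstract text can be stripped of its markers; the record values are copied from the first entry below, and the helper name is ours:

```python
import re

# Example record built from the first entry of the dump; field names follow the schema above.
record = {
    "corpus_id": "arxiv-669401",
    "paper_id": "cs/0003001",
    "title": "Making news understandable to computers",
    "abstract": ("<|reference_start|>Making news understandable to computers: "
                 "Computers and devices are largely unaware of events taking place "
                 "in the world. ...<|reference_end|>"),
    "source": "arxiv",
    "bibtex": "@article{mueller2000making, title={Making news understandable to computers}, ...}",
    "citation_key": "mueller2000making",
}

def strip_markers(abstract: str) -> str:
    """Return the abstract text without the <|reference_start|>/<|reference_end|> wrappers."""
    match = re.search(r"<\|reference_start\|>(.*?)<\|reference_end\|>", abstract, re.DOTALL)
    return match.group(1).strip() if match else abstract.strip()

print(record["citation_key"], "->", strip_markers(record["abstract"])[:60])
```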
arxiv-669401
cs/0003001
Making news understandable to computers
<|reference_start|>Making news understandable to computers: Computers and devices are largely unaware of events taking place in the world. This could be changed if news were made available in a computer-understandable form. In this paper we present XML documents called NewsForms that represent the key points of 17 types of news events. We discuss the benefits of computer-understandable news and present the NewsExtract program for converting text news stories into NewsForms.<|reference_end|>
arxiv
@article{mueller2000making, title={Making news understandable to computers}, author={Erik T. Mueller}, journal={arXiv preprint arXiv:cs/0003001}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003001}, primaryClass={cs.IR} }
mueller2000making
arxiv-669402
cs/0003002
Combining Random Number Generators using Quasicrystals
<|reference_start|>Combining Random Number Generators using Quasicrystals: This paper has been withdrawn by the author(s),<|reference_end|>
arxiv
@article{guimond2000combining, title={Combining Random Number Generators using Quasicrystals}, author={Louis-Sebastien Guimond, Zuzana Masakova, Jiri Patera and Edita Pelantova}, journal={arXiv preprint arXiv:cs/0003002}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003002}, primaryClass={cs.DM} }
guimond2000combining
arxiv-669403
cs/0003003
Prospects for in-depth story understanding by computer
<|reference_start|>Prospects for in-depth story understanding by computer: While much research on the hard problem of in-depth story understanding by computer was performed starting in the 1970s, interest shifted in the 1990s to information extraction and word sense disambiguation. Now that a degree of success has been achieved on these easier problems, I propose it is time to return to in-depth story understanding. In this paper I examine the shift away from story understanding, discuss some of the major problems in building a story understanding system, present some possible solutions involving a set of interacting understanding agents, and provide pointers to useful tools and resources for building story understanding systems.<|reference_end|>
arxiv
@article{mueller2000prospects, title={Prospects for in-depth story understanding by computer}, author={Erik T. Mueller}, journal={arXiv preprint arXiv:cs/0003003}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003003}, primaryClass={cs.AI cs.CL} }
mueller2000prospects
arxiv-669404
cs/0003004
A database and lexicon of scripts for ThoughtTreasure
<|reference_start|>A database and lexicon of scripts for ThoughtTreasure: Since scripts were proposed in the 1970's as an inferencing mechanism for AI and natural language processing programs, there have been few attempts to build a database of scripts. This paper describes a database and lexicon of scripts that has been added to the ThoughtTreasure commonsense platform. The database provides the following information about scripts: sequence of events, roles, props, entry conditions, results, goals, emotions, places, duration, frequency, and cost. English and French words and phrases are linked to script concepts.<|reference_end|>
arxiv
@article{mueller2000a, title={A database and lexicon of scripts for ThoughtTreasure}, author={Erik T. Mueller}, journal={arXiv preprint arXiv:cs/0003004}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003004}, primaryClass={cs.AI cs.CL} }
mueller2000a
arxiv-669405
cs/0003005
Don't Trash your Intermediate Results, Cache 'em
<|reference_start|>Don't Trash your Intermediate Results, Cache 'em: In data warehouse and data mart systems, queries often take a long time to execute due to their complex nature. Query response times can be greatly improved by caching final/intermediate results of previous queries, and using them to answer later queries. In this paper we describe a caching system called Exchequer which incorporates several novel features including optimization aware cache maintenance and the use of a cache aware optimizer. In contrast, in existing work, the module that makes cost-benefit decisions is part of the cache manager and works independently of the optimizer which essentially reconsiders these decisions while finding the best plan for a query. In our work, the optimizer takes the decisions for the cache manager. Furthermore, existing approaches are either restricted to cube (slice/point) queries, or cache just the query results. On the other hand, our work is extensible and in fact presents a data-model independent framework and algorithm. Our experimental results attest to the efficacy of our cache management techniques and show that over a wide range of parameters (a) Exchequer's query response times are lower by more than 30% compared to the best performing competitor, and (b) Exchequer can deliver the same response time as its competitor with just one tenth of the cache size.<|reference_end|>
arxiv
@article{roy2000don't, title={Don't Trash your Intermediate Results, Cache 'em}, author={Prasan Roy, Krithi Ramamritham, S. Seshadri, Pradeep Shenoy, S. Sudarshan}, journal={arXiv preprint arXiv:cs/0003005}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003005}, primaryClass={cs.DB} }
roy2000don't
arxiv-669406
cs/0003006
Materialized View Selection and Maintenance Using Multi-Query Optimization
<|reference_start|>Materialized View Selection and Maintenance Using Multi-Query Optimization: Because the presence of views enhances query performance, materialized views are increasingly being supported by commercial database/data warehouse systems. Whenever the data warehouse is updated, the materialized views must also be updated. However, whereas the amount of data entering a warehouse, the query loads, and the need to obtain up-to-date responses are all increasing, the time window available for making the warehouse up-to-date is shrinking. These trends necessitate efficient techniques for the maintenance of materialized views. In this paper, we show how to find an efficient plan for maintenance of a {\em set} of views, by exploiting common subexpressions between different view maintenance expressions. These common subexpressions may be materialized temporarily during view maintenance. Our algorithms also choose subexpressions/indices to be materialized permanently (and maintained along with other materialized views), to speed up view maintenance. While there has been much work on view maintenance in the past, our novel contributions lie in exploiting a recently developed framework for multiquery optimization to efficiently find good view maintenance plans as above. In addition to faster view maintenance, our algorithms can also be used to efficiently select materialized views to speed up workloads containing queries.<|reference_end|>
arxiv
@article{mistry2000materialized, title={Materialized View Selection and Maintenance Using Multi-Query Optimization}, author={Hoshi Mistry, Prasan Roy, Krithi Ramamritham, S. Sudarshan}, journal={arXiv preprint arXiv:cs/0003006}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003006}, primaryClass={cs.DB} }
mistry2000materialized
arxiv-669407
cs/0003007
Computing Circumscriptive Databases by Integer Programming: Revisited (Extended Abstract)
<|reference_start|>Computing Circumscriptive Databases by Integer Programming: Revisited (Extended Abstract): In this paper, we consider a method of computing minimal models in circumscription using integer programming in propositional logic and first-order logic with domain closure axioms and unique name axioms. This kind of treatment is very important since it enables various techniques developed in operations research to be applied to nonmonotonic reasoning. Nerode et al. (1995) were the first to propose a method of computing circumscription using integer programming. They claimed their method was correct for circumscription with fixed predicates, but we show that their method does not correctly reflect their claim. We show a correct method of computing all the minimal models not only with fixed predicates but also with varied predicates, and we extend our method to compute prioritized circumscription as well.<|reference_end|>
arxiv
@article{satoh2000computing, title={Computing Circumscriptive Databases by Integer Programming: Revisited (Extended Abstract)}, author={Ken Satoh, Hidenori Okamoto}, journal={arXiv preprint arXiv:cs/0003007}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003007}, primaryClass={cs.AI cs.LO} }
satoh2000computing
arxiv-669408
cs/0003008
Consistency Management of Normal Logic Program by Top-down Abductive Proof Procedure
<|reference_start|>Consistency Management of Normal Logic Program by Top-down Abductive Proof Procedure: This paper presents a method of computing a revision of a function-free normal logic program. If an added rule is inconsistent with a program, that is, if it leads to a situation such that no stable model exists for the new program, then deletion and addition of rules are performed to avoid inconsistency. We specify a revision by translating a normal logic program into an abductive logic program with abducibles to represent deletion and addition of rules. To compute such deletion and addition, we propose an adaptation of our top-down abductive proof procedure to compute the abducibles relevant to an added rule. We compute a minimally revised program by choosing a minimal set of abducibles among all the sets of abducibles computed by a top-down proof procedure.<|reference_end|>
arxiv
@article{satoh2000consistency, title={Consistency Management of Normal Logic Program by Top-down Abductive Proof Procedure}, author={Ken Satoh}, journal={arXiv preprint arXiv:cs/0003008}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003008}, primaryClass={cs.AI} }
satoh2000consistency
arxiv-669409
cs/0003009
Conditional indifference and conditional preservation
<|reference_start|>Conditional indifference and conditional preservation: The idea of preserving conditional beliefs emerged recently as a new paradigm apt to guide the revision of epistemic states. Conditionals are substantially different from propositional beliefs and need specific treatment. In this paper, we present a new approach to conditionals, capturing particularly well their dynamic part as revision policies. We thoroughly axiomatize a principle of conditional preservation as an indifference property with respect to conditional structures of worlds. This principle is developed in a semi-quantitative setting, so as to reveal its fundamental meaning for belief revision in quantitative as well as in qualitative frameworks. In fact, it is shown to cover other proposed approaches to conditional preservation.<|reference_end|>
arxiv
@article{kern-isberner2000conditional, title={Conditional indifference and conditional preservation}, author={Gabriele Kern-Isberner}, journal={arXiv preprint arXiv:cs/0003009}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003009}, primaryClass={cs.AI cs.LO} }
kern-isberner2000conditional
arxiv-669410
cs/0003010
TSIA: A Dataflow Model
<|reference_start|>TSIA: A Dataflow Model: The Task System and Item Architecture (TSIA) is a model for transparent application execution. In many real-world projects, a TSIA provides a simple application with a transparent reliable, distributed, heterogeneous, adaptive, dynamic, real-time, parallel, secure or other execution. TSIA is suitable for many applications, not just for the simple applications served to date. This presentation shows that TSIA is a dataflow model - a long-standing model for transparent parallel execution. The advances to the dataflow model include a simple semantics, as well as support for input/output, for modifiable items and for other such effects.<|reference_end|>
arxiv
@article{steinmacher-burow2000tsia:, title={TSIA: A Dataflow Model}, author={Burkhard D. Steinmacher-Burow}, journal={arXiv preprint arXiv:cs/0003010}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003010}, primaryClass={cs.PL} }
steinmacher-burow2000tsia:
arxiv-669411
cs/0003011
Automatic Belief Revision in SNePS
<|reference_start|>Automatic Belief Revision in SNePS: SNePS is a logic- and network-based knowledge representation, reasoning, and acting system, based on a monotonic, paraconsistent, first-order term logic, with compositional intensional semantics. It has an ATMS-style facility for belief contraction, and an acting component, including a well-defined syntax and semantics for primitive and composite acts, as well as for ``rules'' that allow for acting in support of reasoning and reasoning in support of acting. SNePS has been designed to support natural language competent cognitive agents. When the current version of SNePS detects an explicit contradiction, it interacts with the user, providing information that helps the user decide what to remove from the knowledge base in order to remove the contradiction. The forthcoming SNePS 2.6 will also do automatic belief contraction if the information in the knowledge base warrants it.<|reference_end|>
arxiv
@article{shapiro2000automatic, title={Automatic Belief Revision in SNePS}, author={Stuart C. Shapiro and Frances L. Johnson}, journal={arXiv preprint arXiv:cs/0003011}, year={2000}, number={2000-01}, archivePrefix={arXiv}, eprint={cs/0003011}, primaryClass={cs.AI cs.LO} }
shapiro2000automatic
arxiv-669412
cs/0003012
Defeasible Reasoning in OSCAR
<|reference_start|>Defeasible Reasoning in OSCAR: This is a system description for the OSCAR defeasible reasoner.<|reference_end|>
arxiv
@article{pollock2000defeasible, title={Defeasible Reasoning in OSCAR}, author={John L. Pollock}, journal={arXiv preprint arXiv:cs/0003012}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003012}, primaryClass={cs.AI} }
pollock2000defeasible
arxiv-669413
cs/0003013
A flexible framework for defeasible logics
<|reference_start|>A flexible framework for defeasible logics: Logics for knowledge representation suffer from over-specialization: while each logic may provide an ideal representation formalism for some problems, it is less than optimal for others. A solution to this problem is to choose from several logics and, when necessary, combine the representations. In general, such an approach results in a very difficult problem of combination. However, if we can choose the logics from a uniform framework then the problem of combining them is greatly simplified. In this paper, we develop such a framework for defeasible logics. It supports all defeasible logics that satisfy a strong negation principle. We use logic meta-programs as the basis for the framework.<|reference_end|>
arxiv
@article{antoniou2000a, title={A flexible framework for defeasible logics}, author={G. Antoniou, D. Billigton, G. Governatori, M.J. Maher}, journal={arXiv preprint arXiv:cs/0003013}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003013}, primaryClass={cs.AI cs.LO} }
antoniou2000a
arxiv-669414
cs/0003014
Applying Maxi-adjustment to Adaptive Information Filtering Agents
<|reference_start|>Applying Maxi-adjustment to Adaptive Information Filtering Agents: Learning and adaptation is a fundamental property of intelligent agents. In the context of adaptive information filtering, a filtering agent's beliefs about a user's information needs have to be revised regularly with reference to the user's most current information preferences. This learning and adaptation process is essential for maintaining the agent's filtering performance. The AGM belief revision paradigm provides a rigorous foundation for modelling rational and minimal changes to an agent's beliefs. In particular, the maxi-adjustment method, which follows the AGM rationale of belief change, offers a sound and robust computational mechanism to develop adaptive agents so that learning autonomy of these agents can be enhanced. This paper describes how the maxi-adjustment method is applied to develop the learning components of adaptive information filtering agents, and discusses possible difficulties of applying such a framework to these agents.<|reference_end|>
arxiv
@article{lau2000applying, title={Applying Maxi-adjustment to Adaptive Information Filtering Agents}, author={Raymond Lau (1), Arthur H.M. ter Hofstede (1), Peter D. Bruza (2) ((1) Queensland University of Technology, (2) Distributed Systems Technology Centre)}, journal={arXiv preprint arXiv:cs/0003014}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003014}, primaryClass={cs.AI cs.MA} }
lau2000applying
arxiv-669415
cs/0003015
On the semantics of merging
<|reference_start|>On the semantics of merging: Intelligent agents are often faced with the problem of trying to merge possibly conflicting pieces of information obtained from different sources into a consistent view of the world. We propose a framework for the modelling of such merging operations with roots in the work of Spohn (1988, 1991). Unlike most approaches we focus on the merging of epistemic states, not knowledge bases. We construct a number of plausible merging operations and measure them against various properties that merging operations ought to satisfy. Finally, we discuss the connection between merging and the use of infobases Meyer (1999) and Meyer et al. (2000).<|reference_end|>
arxiv
@article{meyer2000on, title={On the semantics of merging}, author={Thomas Meyer}, journal={arXiv preprint arXiv:cs/0003015}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003015}, primaryClass={cs.AI cs.LO} }
meyer2000on
arxiv-669416
cs/0003016
Abductive and Consistency-Based Diagnosis Revisited: a Modeling Perspective
<|reference_start|>Abductive and Consistency-Based Diagnosis Revisited: a Modeling Perspective: Diagnostic reasoning has been characterized logically as consistency-based reasoning or abductive reasoning. Previous analyses in the literature have shown, on the one hand, that choosing the (in general more restrictive) abductive definition may be appropriate or not, depending on the content of the knowledge base [Console&Torasso91], and, on the other hand, that, depending on the choice of the definition the same knowledge should be expressed in different form [Poole94]. Since in Model-Based Diagnosis a major problem is finding the right way of abstracting the behavior of the system to be modeled, this paper discusses the relation between modeling, and in particular abstraction in the model, and the notion of diagnosis.<|reference_end|>
arxiv
@article{dupre'2000abductive, title={Abductive and Consistency-Based Diagnosis Revisited: a Modeling Perspective}, author={Daniele Theseider Dupre' (Dipartimento di Scienze e Tecnologie Avanzate - Universita' del Piemonte Orientale, Alessandria, Italy)}, journal={arXiv preprint arXiv:cs/0003016}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003016}, primaryClass={cs.AI} }
dupre'2000abductive
arxiv-669417
cs/0003017
The lexicographic closure as a revision process
<|reference_start|>The lexicographic closure as a revision process: The connections between nonmonotonic reasoning and belief revision are well-known. A central problem in the area of nonmonotonic reasoning is the problem of default entailment, i.e., when should an item of default information representing "if A is true then, normally, B is true" be said to follow from a given set of items of such information. Many answers to this question have been proposed but, surprisingly, virtually none have attempted any explicit connection to belief revision. The aim of this paper is to give an example of how such a connection can be made by showing how the lexicographic closure of a set of defaults may be conceptualised as a process of iterated revision by sets of sentences. Specifically we use the revision process of Nayak.<|reference_end|>
arxiv
@article{booth2000the, title={The lexicographic closure as a revision process}, author={Richard Booth}, journal={arXiv preprint arXiv:cs/0003017}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003017}, primaryClass={cs.AI cs.LO} }
booth2000the
arxiv-669418
cs/0003018
Description of GADEL
<|reference_start|>Description of GADEL: This article describes the first implementation of the GADEL system: a Genetic Algorithm for Default Logic. The goal of GADEL is to compute extensions in Reiter's default logic. It accepts any kind of finite propositional default theory and is based on evolutionary principles of Genetic Algorithms. Its first experimental results on certain instances of the problem show that this new approach to the problem can be successful.<|reference_end|>
arxiv
@article{stephan2000description, title={Description of GADEL}, author={I. Stephan, F. Saubion, P. Nicolas (University of Angers, France)}, journal={arXiv preprint arXiv:cs/0003018}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003018}, primaryClass={cs.AI cs.LO} }
stephan2000description
arxiv-669419
cs/0003019
Extending Classical Logic with Inductive Definitions
<|reference_start|>Extending Classical Logic with Inductive Definitions: The goal of this paper is to extend classical logic with a generalized notion of inductive definition supporting positive and negative induction, to investigate the properties of this logic, its relationships to other logics in the area of non-monotonic reasoning, logic programming and deductive databases, and to show its application for knowledge representation by giving a typology of definitional knowledge.<|reference_end|>
arxiv
@article{denecker2000extending, title={Extending Classical Logic with Inductive Definitions}, author={Marc Denecker}, journal={arXiv preprint arXiv:cs/0003019}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003019}, primaryClass={cs.LO cs.AI} }
denecker2000extending
arxiv-669420
cs/0003020
ACLP: Integrating Abduction and Constraint Solving
<|reference_start|>ACLP: Integrating Abduction and Constraint Solving: ACLP is a system which combines abductive reasoning and constraint solving by integrating the frameworks of Abductive Logic Programming (ALP) and Constraint Logic Programming (CLP). It forms a general high-level knowledge representation environment for abductive problems in Artificial Intelligence and other areas. In ACLP, the task of abduction is supported and enhanced by its non-trivial integration with constraint solving facilitating its application to complex problems. The ACLP system is currently implemented on top of the CLP language of ECLiPSe as a meta-interpreter exploiting its underlying constraint solver for finite domains. It has been applied to the problems of planning and scheduling in order to test its computational effectiveness compared with the direct use of the (lower level) constraint solving framework of CLP on which it is built. These experiments provide evidence that the abductive framework of ACLP does not compromise significantly the computational efficiency of the solutions. Other experiments show the natural ability of ACLP to accommodate easily and in a robust way new or changing requirements of the original problem.<|reference_end|>
arxiv
@article{kakas2000aclp:, title={ACLP: Integrating Abduction and Constraint Solving}, author={Antonis Kakas}, journal={arXiv preprint arXiv:cs/0003020}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003020}, primaryClass={cs.AI} }
kakas2000aclp:
arxiv-669421
cs/0003021
Relevance Sensitive Non-Monotonic Inference on Belief Sequences
<|reference_start|>Relevance Sensitive Non-Monotonic Inference on Belief Sequences: We present a method for relevance sensitive non-monotonic inference from belief sequences which incorporates insights pertaining to prioritized inference and relevance sensitive, inconsistency tolerant belief revision. Our model uses a finite, logically open sequence of propositional formulas as a representation for beliefs and defines a notion of inference from maxiconsistent subsets of formulas guided by two orderings: a temporal sequencing and an ordering based on relevance relations between the conclusion and formulas in the sequence. The relevance relations are ternary (using context as a parameter) as opposed to standard binary axiomatizations. The inference operation thus defined easily handles iterated revision by maintaining a revision history, blocks the derivation of inconsistent answers from a possibly inconsistent sequence and maintains the distinction between explicit and implicit beliefs. In doing so, it provides a finitely presented formalism and a plausible model of reasoning for automated agents.<|reference_end|>
arxiv
@article{chopra2000relevance, title={Relevance Sensitive Non-Monotonic Inference on Belief Sequences}, author={Samir Chopra, Konstantinos Georgatos, Rohit Parikh}, journal={arXiv preprint arXiv:cs/0003021}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003021}, primaryClass={cs.AI} }
chopra2000relevance
arxiv-669422
cs/0003022
Hypothetical revision and matter-of-fact supposition
<|reference_start|>Hypothetical revision and matter-of-fact supposition: The paper studies the notion of supposition encoded in non-Archimedean conditional probability (and revealed in the acceptance of the so-called indicative conditionals). The notion of qualitative change of view that thus arises is axiomatized and compared with standard notions like AGM and UPDATE. Applications in the following fields are discussed: (1) theory of games and decisions, (2) causal models, (3) non-monotonic logic.<|reference_end|>
arxiv
@article{arlo-costa2000hypothetical, title={Hypothetical revision and matter-of-fact supposition}, author={Horacio Arlo-Costa}, journal={arXiv preprint arXiv:cs/0003022}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003022}, primaryClass={cs.AI cs.CL} }
arlo-costa2000hypothetical
arxiv-669423
cs/0003023
Probabilistic Default Reasoning with Conditional Constraints
<|reference_start|>Probabilistic Default Reasoning with Conditional Constraints: We propose a combination of probabilistic reasoning from conditional constraints with approaches to default reasoning from conditional knowledge bases. In detail, we generalize the notions of Pearl's entailment in system Z, Lehmann's lexicographic entailment, and Geffner's conditional entailment to conditional constraints. We give some examples that show that the new notions of z-, lexicographic, and conditional entailment have properties similar to those of their classical counterparts. Moreover, we show that the new notions of z-, lexicographic, and conditional entailment are proper generalizations of both their classical counterparts and the classical notion of logical entailment for conditional constraints.<|reference_end|>
arxiv
@article{lukasiewicz2000probabilistic, title={Probabilistic Default Reasoning with Conditional Constraints}, author={Thomas Lukasiewicz}, journal={arXiv preprint arXiv:cs/0003023}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003023}, primaryClass={cs.AI} }
lukasiewicz2000probabilistic
arxiv-669424
cs/0003024
A Compiler for Ordered Logic Programs
<|reference_start|>A Compiler for Ordered Logic Programs: This paper describes a system, called PLP, for compiling ordered logic programs into standard logic programs under the answer set semantics. In an ordered logic program, rules are named by unique terms, and preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Since the result of the translation is an extended logic program, existing logic programming systems can be used as underlying reasoning engine. In particular, PLP is conceived as a front-end to the logic programming systems dlv and smodels.<|reference_end|>
arxiv
@article{delgrande2000a, title={A Compiler for Ordered Logic Programs}, author={James P. Delgrande, Torsten Schaub, Hans Tompits}, journal={arXiv preprint arXiv:cs/0003024}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003024}, primaryClass={cs.AI} }
delgrande2000a
arxiv-669425
cs/0003025
Logic Programming for Describing and Solving Planning Problems
<|reference_start|>Logic Programming for Describing and Solving Planning Problems: A logic programming paradigm which expresses solutions to problems as stable models has recently been promoted as a declarative approach to solving various combinatorial and search problems, including planning problems. In this paradigm, all program rules are considered as constraints and solutions are stable models of the rule set. This is a rather radical departure from the standard paradigm of logic programming. In this paper we revisit abductive logic programming and argue that it allows a programming style which is as declarative as programming based on stable models. However, within abductive logic programming, one has two kinds of rules. On the one hand, predicate definitions (which may depend on the abducibles), which are nothing other than standard logic programs (with their non-monotonic semantics when they contain negation); on the other hand, rules which constrain the models for the abducibles. In this sense abductive logic programming is a smooth extension of the standard paradigm of logic programming, not a radical departure.<|reference_end|>
arxiv
@article{bruynooghe2000logic, title={Logic Programming for Describing and Solving Planning Problems}, author={Maurice Bruynooghe (Katholieke Universiteit Leuven)}, journal={arXiv preprint arXiv:cs/0003025}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003025}, primaryClass={cs.AI cs.LO} }
bruynooghe2000logic
arxiv-669426
cs/0003026
A Comparison of Logic Programming Approaches for Representation and Solving of Constraint Satisfaction Problems
<|reference_start|>A Comparison of Logic Programming Approaches for Representation and Solving of Constraint Satisfaction Problems: Many logic programming based approaches can be used to describe and solve combinatorial search problems. On the one hand there are definite programs and constraint logic programs that compute a solution as an answer substitution to a query containing the variables of the constraint satisfaction problem. On the other hand there are approaches based on stable model semantics, abduction, and first-order logic model generation that compute solutions as models of some theory. This paper compares these different approaches from point of view of knowledge representation (how declarative are the programs) and from point of view of performance (how good are they at solving typical problems).<|reference_end|>
arxiv
@article{pelov2000a, title={A Comparison of Logic Programming Approaches for Representation and Solving of Constraint Satisfaction Problems}, author={Nikolay Pelov, Emmanuel De Mot, Maurice Bruynooghe}, journal={arXiv preprint arXiv:cs/0003026}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003026}, primaryClass={cs.LO} }
pelov2000a
arxiv-669427
cs/0003027
SLDNFA-system
<|reference_start|>SLDNFA-system: The SLDNFA-system results from the LP+ project at the K.U.Leuven, which investigates logics and proof procedures for these logics for declarative knowledge representation. Within this project inductive definition logic (ID-logic) is used as representation logic. Different solvers are being developed for this logic and one of these is SLDNFA. A prototype of the system is available and is used to investigate how to efficiently solve problems represented in ID-logic.<|reference_end|>
arxiv
@article{van nuffelen2000sldnfa-system, title={SLDNFA-system}, author={Bert Van Nuffelen}, journal={arXiv preprint arXiv:cs/0003027}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003027}, primaryClass={cs.AI} }
van nuffelen2000sldnfa-system
arxiv-669428
cs/0003028
Logic Programs with Compiled Preferences
<|reference_start|>Logic Programs with Compiled Preferences: We describe an approach for compiling preferences into logic programs under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of dedicated atoms. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed theory correspond with the preferred answer sets of the original theory. Our approach allows both the specification of static orderings (as found in most previous work), in which preferences are external to a logic program, as well as orderings on sets of rules. In large part then, we are interested in describing a general methodology for uniformly incorporating preference information in a logic program. Since the result of our translation is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a compiler, available on the web, as a front-end for these programming systems.<|reference_end|>
arxiv
@article{delgrande2000logic, title={Logic Programs with Compiled Preferences}, author={James P. Delgrande, Torsten Schaub, Hans Tompits}, journal={arXiv preprint arXiv:cs/0003028}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003028}, primaryClass={cs.AI} }
delgrande2000logic
arxiv-669429
cs/0003029
Fuzzy Approaches to Abductive Inference
<|reference_start|>Fuzzy Approaches to Abductive Inference: This paper proposes two kinds of fuzzy abductive inference in the framework of fuzzy rule bases. The abductive inference processes described here depend on the semantics of the rule. We distinguish two classes of interpretation of a fuzzy rule: certainty generation rules and possible generation rules. In this paper we present the architecture of abductive inference for the first class of interpretation. We give two kinds of problems that can be resolved using the proposed models of inference.<|reference_end|>
arxiv
@article{mellouli2000fuzzy, title={Fuzzy Approaches to Abductive Inference}, author={Nedra Mellouli, Bernadette Bouchon-Meunier}, journal={arXiv preprint arXiv:cs/0003029}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003029}, primaryClass={cs.AI} }
mellouli2000fuzzy
arxiv-669430
cs/0003030
Problem solving in ID-logic with aggregates: some experiments
<|reference_start|>Problem solving in ID-logic with aggregates: some experiments: The goal of the LP+ project at the K.U.Leuven is to design an expressive logic, suitable for declarative knowledge representation, and to develop intelligent systems based on Logic Programming technology for solving computational problems using the declarative specifications. The ID-logic is an integration of typed classical logic and a definition logic. Different abductive solvers for this language are being developed. This paper reports on the integration of higher-order aggregates into ID-logic and the consequences for the solver SLDNFA.<|reference_end|>
arxiv
@article{van nuffelen2000problem, title={Problem solving in ID-logic with aggregates: some experiments}, author={Bert Van Nuffelen, Marc Denecker}, journal={arXiv preprint arXiv:cs/0003030}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003030}, primaryClass={cs.AI} }
van nuffelen2000problem
arxiv-669431
cs/0003031
Optimal Belief Revision
<|reference_start|>Optimal Belief Revision: We propose a new approach to belief revision that provides a way to change knowledge bases with a minimum of effort. We call this way of revising belief states optimal belief revision. Our revision method gives special attention to the fact that most belief revision processes are directed to a specific informational objective. This approach to belief change is founded on notions such as optimal context and accessibility. For the sentential model of belief states we provide both a formal description of contexts as sub-theories determined by three parameters and a method to construct contexts. Next, we introduce an accessibility ordering for belief sets, which we then use for selecting the best (optimal) contexts with respect to the processing effort involved in the revision. Then, for finitely axiomatizable knowledge bases, we characterize a finite accessibility ranking from which the accessibility ordering for the entire base is generated and show how to determine the ranking of an arbitrary sentence in the language. Finally, we define the adjustment of the accessibility ranking of a revised base of a belief set.<|reference_end|>
arxiv
@article{vodislav2000optimal, title={Optimal Belief Revision}, author={Carmen Vodislav and Robert E. Mercer}, journal={arXiv preprint arXiv:cs/0003031}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003031}, primaryClass={cs.AI} }
vodislav2000optimal
arxiv-669432
cs/0003032
cc-Golog: Towards More Realistic Logic-Based Robot Controllers
<|reference_start|>cc-Golog: Towards More Realistic Logic-Based Robot Controllers: High-level robot controllers in realistic domains typically deal with processes which operate concurrently, change the world continuously, and where the execution of actions is event-driven as in ``charge the batteries as soon as the voltage level is low''. While non-logic-based robot control languages are well suited to express such scenarios, they fare poorly when it comes to projecting, in a conspicuous way, how the world evolves when actions are executed. On the other hand, a logic-based control language like ConGolog, based on the situation calculus, is well-suited for the latter. However, it has problems expressing event-driven behavior. In this paper, we show how these problems can be overcome by first extending the situation calculus to support continuous change and event-driven behavior and then presenting cc-Golog, a variant of ConGolog which is based on the extended situation calculus. One benefit of cc-Golog is that it narrows the gap in expressiveness compared to non-logic-based control languages while preserving a semantically well-founded projection mechanism.<|reference_end|>
arxiv
@article{grosskreutz2000cc-golog:, title={cc-Golog: Towards More Realistic Logic-Based Robot Controllers}, author={Henrik Grosskreutz, Gerhard Lakemeyer}, journal={arXiv preprint arXiv:cs/0003032}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003032}, primaryClass={cs.AI} }
grosskreutz2000cc-golog:
arxiv-669433
cs/0003033
Smodels: A System for Answer Set Programming
<|reference_start|>Smodels: A System for Answer Set Programming: The Smodels system implements the stable model semantics for normal logic programs. It handles a subclass of programs which contain no function symbols and are domain-restricted but supports extensions including built-in functions as well as cardinality and weight constraints. On top of this core engine more involved systems can be built. As an example, we have implemented total and partial stable model computation for disjunctive logic programs. An interesting application method is based on answer set programming, i.e., encoding an application problem as a set of rules so that its solutions are captured by the stable models of the rules. Smodels has been applied to a number of areas including planning, model checking, reachability analysis, product configuration, dynamic constraint satisfaction, and feature interaction.<|reference_end|>
arxiv
@article{niemela2000smodels:, title={Smodels: A System for Answer Set Programming}, author={Ilkka Niemela, Patrik Simons, Tommi Syrjanen}, journal={arXiv preprint arXiv:cs/0003033}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003033}, primaryClass={cs.AI} }
niemela2000smodels:
arxiv-669434
cs/0003034
E-RES: A System for Reasoning about Actions, Events and Observations
<|reference_start|>E-RES: A System for Reasoning about Actions, Events and Observations: E-RES is a system that implements the Language E, a logic for reasoning about narratives of action occurrences and observations. E's semantics is model-theoretic, but this implementation is based on a sound and complete reformulation of E in terms of argumentation, and uses general computational techniques of argumentation frameworks. The system derives sceptical non-monotonic consequences of a given reformulated theory which exactly correspond to consequences entailed by E's model-theory. The computation relies on a complementary ability of the system to derive credulous non-monotonic consequences together with a set of supporting assumptions which is sufficient for the (credulous) conclusion to hold. E-RES allows theories to contain general action laws, statements about action occurrences, observations and statements of ramifications (or universal laws). It is able to derive consequences both forward and backward in time. This paper gives a short overview of the theoretical basis of E-RES and illustrates its use on a variety of examples. Currently, E-RES is being extended so that the system can be used for planning.<|reference_end|>
arxiv
@article{kakas2000e-res:, title={E-RES: A System for Reasoning about Actions, Events and Observations}, author={Antonis Kakas, Rob Miller, Francesca Toni}, journal={arXiv preprint arXiv:cs/0003034}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003034}, primaryClass={cs.AI} }
kakas2000e-res:
arxiv-669435
cs/0003035
Declarative Representation of Revision Strategies
<|reference_start|>Declarative Representation of Revision Strategies: In this paper we introduce a nonmonotonic framework for belief revision in which reasoning about the reliability of different pieces of information based on meta-knowledge about the information is possible, and where revision strategies can be described declaratively. The approach is based on a Poole-style system for default reasoning in which entrenchment information is represented in the logical language. A notion of inference based on the least fixed point of a monotone operator is used to make sure that all theories possess a consistent set of conclusions.<|reference_end|>
arxiv
@article{brewka2000declarative, title={Declarative Representation of Revision Strategies}, author={Gerhard Brewka}, journal={arXiv preprint arXiv:cs/0003035}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003035}, primaryClass={cs.AI cs.LO} }
brewka2000declarative
arxiv-669436
cs/0003036
DLV - A System for Declarative Problem Solving
<|reference_start|>DLV - A System for Declarative Problem Solving: DLV is an efficient logic programming and non-monotonic reasoning (LPNMR) system with advanced knowledge representation mechanisms and interfaces to classic relational database systems. Its core language is disjunctive datalog (function-free disjunctive logic programming) under the Answer Set Semantics with integrity constraints, both default and strong (or explicit) negation, and queries. Integer arithmetics and various built-in predicates are also supported. In addition DLV has several frontends, namely brave and cautious reasoning, abductive diagnosis, consistency-based diagnosis, a subset of SQL3, planning with action languages, and logic programming with inheritance.<|reference_end|>
arxiv
@article{eiter2000dlv, title={DLV - A System for Declarative Problem Solving}, author={Thomas Eiter and Wolfgang Faber and Christoph Koch and Nicola Leone and Gerald Pfeifer}, journal={arXiv preprint arXiv:cs/0003036}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003036}, primaryClass={cs.AI cs.LO} }
eiter2000dlv
arxiv-669437
cs/0003037
QUIP - A Tool for Computing Nonmonotonic Reasoning Tasks
<|reference_start|>QUIP - A Tool for Computing Nonmonotonic Reasoning Tasks: In this paper, we outline the prototype of an automated inference tool, called QUIP, which provides a uniform implementation for several nonmonotonic reasoning formalisms. The theoretical basis of QUIP is derived from well-known results about the computational complexity of nonmonotonic logics and exploits a representation of the different reasoning tasks in terms of quantified boolean formulae.<|reference_end|>
arxiv
@article{egly2000quip, title={QUIP - A Tool for Computing Nonmonotonic Reasoning Tasks}, author={Uwe Egly, Thomas Eiter, Hans Tompits, Stefan Woltran}, journal={arXiv preprint arXiv:cs/0003037}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003037}, primaryClass={cs.AI} }
egly2000quip
arxiv-669438
cs/0003038
A Splitting Set Theorem for Epistemic Specifications
<|reference_start|>A Splitting Set Theorem for Epistemic Specifications: Over the past decade a considerable amount of research has been done to expand logic programming languages to handle incomplete information. One such language is the language of epistemic specifications. As is usual with logic programming languages, the problem of answering queries is intractable in the general case. For extended disjunctive logic programs, an idea that has proven useful in simplifying the investigation of answer sets is the use of splitting sets. In this paper we will present an extended definition of splitting sets that will be applicable to epistemic specifications. Furthermore, an extension of the splitting set theorem will be presented. Also, a characterization of stratified epistemic specifications will be given in terms of splitting sets. This characterization leads us to an algorithmic method of computing world views of a subclass of epistemic logic programs.<|reference_end|>
arxiv
@article{watson2000a, title={A Splitting Set Theorem for Epistemic Specifications}, author={Richard Watson}, journal={arXiv preprint arXiv:cs/0003038}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003038}, primaryClass={cs.AI} }
watson2000a
arxiv-669439
cs/0003039
DES: a Challenge Problem for Nonmonotonic Reasoning Systems
<|reference_start|>DES: a Challenge Problem for Nonmonotonic Reasoning Systems: The US Data Encryption Standard, DES for short, is put forward as an interesting benchmark problem for nonmonotonic reasoning systems because (i) it provides a set of test cases of industrial relevance which shares features of randomly generated problems and real-world problems, (ii) the representation of DES using normal logic programs with the stable model semantics is simple and easy to understand, and (iii) this subclass of logic programs can be seen as an interesting special case for many other formalizations of nonmonotonic reasoning. In this paper we present two encodings of DES as logic programs: a direct one out of the standard specifications and an optimized one extending the work of Massacci and Marraro. The computational properties of the encodings are studied by using them for DES key search with the Smodels system as the implementation of the stable model semantics. Results indicate that the encodings and Smodels are quite competitive: they outperform state-of-the-art SAT-checkers working with an optimized encoding of DES into SAT and are comparable with a SAT-checker that is customized and tuned for the optimized SAT encoding.<|reference_end|>
arxiv
@article{hietalahti2000des:, title={DES: a Challenge Problem for Nonmonotonic Reasoning Systems}, author={Maarit Hietalahti, Fabio Massacci, Ilkka Niemela}, journal={arXiv preprint arXiv:cs/0003039}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003039}, primaryClass={cs.AI} }
hietalahti2000des:
arxiv-669440
cs/0003040
Implementing Integrity Constraints in an Existing Belief Revision System
<|reference_start|>Implementing Integrity Constraints in an Existing Belief Revision System: SNePS is a mature knowledge representation, reasoning, and acting system that has long contained a belief revision subsystem, called SNeBR. SNeBR is triggered when an explicit contradiction is introduced into the SNePS belief space, either because of a user's new assertion, or because of a user's query. SNeBR then makes the user decide what belief to remove from the belief space in order to restore consistency, although it provides information to help the user in making that decision. We have recently added automatic belief revision to SNeBR, by which, under certain circumstances, SNeBR decides by itself which belief to remove, and then informs the user of the decision and its consequences. We have used the well-known belief revision integrity constraints as a guide in designing automatic belief revision, taking into account, however, that SNePS's belief space is not deductively closed, and that it would be infeasible to form the deductive closure in order to decide what belief to remove. This paper briefly describes SNeBR both before and after this revision, discusses how we adapted the integrity constraints for this purpose, and gives an example of the new SNeBR in action.<|reference_end|>
arxiv
@article{johnson2000implementing, title={Implementing Integrity Constraints in an Existing Belief Revision System}, author={Frances L. Johnson and Stuart C. Shapiro}, journal={arXiv preprint arXiv:cs/0003040}, year={2000}, number={2000-03}, archivePrefix={arXiv}, eprint={cs/0003040}, primaryClass={cs.AI cs.LO} }
johnson2000implementing
arxiv-669441
cs/0003041
Coherence, Belief Expansion and Bayesian Networks
<|reference_start|>Coherence, Belief Expansion and Bayesian Networks: We construct a probabilistic coherence measure for information sets which determines a partial coherence ordering. This measure is applied in constructing a criterion for expanding our beliefs in the face of new information. A number of idealizations are being made which can be relaxed by an appeal to Bayesian Networks.<|reference_end|>
arxiv
@article{bovens2000coherence,, title={Coherence, Belief Expansion and Bayesian Networks}, author={Luc Bovens and Stephan Hartmann}, journal={arXiv preprint arXiv:cs/0003041}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003041}, primaryClass={cs.AI cs.LO} }
bovens2000coherence,
arxiv-669442
cs/0003042
Fages' Theorem and Answer Set Programming
<|reference_start|>Fages' Theorem and Answer Set Programming: We generalize a theorem by Francois Fages that describes the relationship between the completion semantics and the answer set semantics for logic programs with negation as failure. The study of this relationship is important in connection with the emergence of answer set programming. Whenever the two semantics are equivalent, answer sets can be computed by a satisfiability solver, and the use of answer set solvers such as smodels and dlv is unnecessary. A logic programming representation of the blocks world due to Ilkka Niemelae is discussed as an example.<|reference_end|>
arxiv
@article{babovich2000fages', title={Fages' Theorem and Answer Set Programming}, author={Yuliya Babovich, Esra Erdem and Vladimir Lifschitz}, journal={arXiv preprint arXiv:cs/0003042}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003042}, primaryClass={cs.AI} }
babovich2000fages'
arxiv-669443
cs/0003043
Automatic Classification of Text Databases through Query Probing
<|reference_start|>Automatic Classification of Text Databases through Query Probing: Many text databases on the web are "hidden" behind search interfaces, and their documents are only accessible through querying. Search engines typically ignore the contents of such search-only databases. Recently, Yahoo-like directories have started to manually organize these databases into categories that users can browse to find these valuable resources. We propose a novel strategy to automate the classification of search-only text databases. Our technique starts by training a rule-based document classifier, and then uses the classifier's rules to generate probing queries. The queries are sent to the text databases, which are then classified based on the number of matches that they produce for each query. We report some initial exploratory experiments that show that our approach is promising to automatically characterize the contents of text databases accessible on the web.<|reference_end|>
arxiv
@article{ipeirotis2000automatic, title={Automatic Classification of Text Databases through Query Probing}, author={Panagiotis Ipeirotis, Luis Gravano and Mehran Sahami}, journal={arXiv preprint arXiv:cs/0003043}, year={2000}, number={CUCS-004-00}, archivePrefix={arXiv}, eprint={cs/0003043}, primaryClass={cs.DB cs.IR} }
ipeirotis2000automatic
arxiv-669444
cs/0003044
On the tractable counting of theory models and its application to belief revision and truth maintenance
<|reference_start|>On the tractable counting of theory models and its application to belief revision and truth maintenance: We introduced decomposable negation normal form (DNNF) recently as a tractable form of propositional theories, and provided a number of powerful logical operations that can be performed on it in polynomial time. We also presented an algorithm for compiling any conjunctive normal form (CNF) into DNNF and provided a structure-based guarantee on its space and time complexity. We present in this paper a linear-time algorithm for converting an ordered binary decision diagram (OBDD) representation of a propositional theory into an equivalent DNNF, showing that DNNFs scale as well as OBDDs. We also identify a subclass of DNNF which we call deterministic DNNF, d-DNNF, and show that the previous complexity guarantees on compiling DNNF continue to hold for this stricter subclass, which has stronger properties. In particular, we present a new operation on d-DNNF which allows us to count its models under the assertion, retraction and flipping of every literal by traversing the d-DNNF twice. That is, after such traversal, we can test in constant-time: the entailment of any literal by the d-DNNF, and the consistency of the d-DNNF under the retraction or flipping of any literal. We demonstrate the significance of these new operations by showing how they allow us to implement linear-time, complete truth maintenance systems and linear-time, complete belief revision systems for two important classes of propositional theories.<|reference_end|>
arxiv
@article{darwiche2000on, title={On the tractable counting of theory models and its application to belief revision and truth maintenance}, author={Adnan Darwiche}, journal={arXiv preprint arXiv:cs/0003044}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003044}, primaryClass={cs.AI} }
darwiche2000on
arxiv-669445
cs/0003045
Termination Proofs for Logic Programs with Tabling
<|reference_start|>Termination Proofs for Logic Programs with Tabling: Tabled logic programming is receiving increasing attention in the Logic Programming community. It avoids many of the shortcomings of SLD execution and provides a more flexible and often extremely efficient execution mechanism for logic programs. In particular, tabled execution of logic programs terminates more often than execution based on SLD-resolution. In this article, we introduce two notions of universal termination of logic programming with Tabling: quasi-termination and (the stronger notion of) LG-termination. We present sufficient conditions for these two notions of termination, namely quasi-acceptability and LG-acceptability, and we show that these conditions are also necessary in case the tabling is well-chosen. Starting from these conditions, we give modular termination proofs, i.e., proofs capable of combining termination proofs of separate programs to obtain termination proofs of combined programs. Finally, in the presence of mode information, we state sufficient conditions which form the basis for automatically proving termination in a constraint-based way.<|reference_end|>
arxiv
@article{verbaeten2000termination, title={Termination Proofs for Logic Programs with Tabling}, author={Sofie Verbaeten, Danny De Schreye, Konstantinos Sagonas}, journal={arXiv preprint arXiv:cs/0003045}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003045}, primaryClass={cs.LO} }
verbaeten2000termination
arxiv-669446
cs/0003046
Linear Tabulated Resolution Based on Prolog Control Strategy
<|reference_start|>Linear Tabulated Resolution Based on Prolog Control Strategy: Infinite loops and redundant computations are long recognized open problems in Prolog. Two ways have been explored to resolve these problems: loop checking and tabling. Loop checking can cut infinite loops, but it cannot be both sound and complete even for function-free logic programs. Tabling seems to be an effective way to resolve infinite loops and redundant computations. However, existing tabulated resolutions, such as OLDT-resolution, SLG- resolution, and Tabulated SLS-resolution, are non-linear because they rely on the solution-lookup mode in formulating tabling. The principal disadvantage of non-linear resolutions is that they cannot be implemented using a simple stack-based memory structure like that in Prolog. Moreover, some strictly sequential operators such as cuts may not be handled as easily as in Prolog. In this paper, we propose a hybrid method to resolve infinite loops and redundant computations. We combine the ideas of loop checking and tabling to establish a linear tabulated resolution called TP-resolution. TP-resolution has two distinctive features: (1) It makes linear tabulated derivations in the same way as Prolog except that infinite loops are broken and redundant computations are reduced. It handles cuts as effectively as Prolog. (2) It is sound and complete for positive logic programs with the bounded-term-size property. The underlying algorithm can be implemented by an extension to any existing Prolog abstract machines such as WAM or ATOAM.<|reference_end|>
arxiv
@article{shen2000linear, title={Linear Tabulated Resolution Based on Prolog Control Strategy}, author={Yi-Dong Shen, Li-Yan Yuan, Jia-Huai You, Neng-Fa Zhou}, journal={Theory and Practice of Logic Programming 1(1):71-103, 2001}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003046}, primaryClass={cs.AI cs.LO} }
shen2000linear
arxiv-669447
cs/0003047
BDD-based reasoning in the fluent calculus - first results
<|reference_start|>BDD-based reasoning in the fluent calculus - first results: The paper reports on first preliminary results and insights gained in a project aiming at implementing the fluent calculus using methods and techniques based on binary decision diagrams. After reporting on an initial experiment showing promising results we discuss our findings concerning various techniques and heuristics used to speed up the reasoning process.<|reference_end|>
arxiv
@article{hoelldobler2000bdd-based, title={BDD-based reasoning in the fluent calculus - first results}, author={Steffen Hoelldobler and Hans-Peter Stoerr}, journal={arXiv preprint arXiv:cs/0003047}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003047}, primaryClass={cs.AI} }
hoelldobler2000bdd-based
arxiv-669448
cs/0003048
PAL: Pertinence Action Language
<|reference_start|>PAL: Pertinence Action Language: The current document contains a brief description of a system for Reasoning about Actions and Change called PAL (Pertinence Action Language) which makes use of several reasoning properties extracted from a Temporal Expert Systems tool called Medtool.<|reference_end|>
arxiv
@article{cabalar2000pal:, title={PAL: Pertinence Action Language}, author={Pedro Cabalar, Manuel Cabarcos, Ramon P. Otero}, journal={arXiv preprint arXiv:cs/0003048}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003048}, primaryClass={cs.AI cs.LO} }
cabalar2000pal:
arxiv-669449
cs/0003049
Planning with Incomplete Information
<|reference_start|>Planning with Incomplete Information: Planning is a natural domain of application for frameworks of reasoning about actions and change. In this paper we study how one such framework, the Language E, can form the basis for planning under (possibly) incomplete information. We define two types of plans: weak and safe plans, and propose a planner, called the E-Planner, which is often able to extend an initial weak plan into a safe plan even though the (explicit) information available is incomplete, e.g. for cases where the initial state is not completely known. The E-Planner is based upon a reformulation of the Language E in argumentation terms and a natural proof theory resulting from the reformulation. It uses an extension of this proof theory by means of abduction for the generation of plans and adopts argumentation-based techniques for extending weak plans into safe plans. We provide representative examples illustrating the behaviour of the E-Planner, in particular for cases where the status of fluents is incompletely known.<|reference_end|>
arxiv
@article{kakas2000planning, title={Planning with Incomplete Information}, author={Antonis Kakas, Rob Miller and Francesca Toni}, journal={arXiv preprint arXiv:cs/0003049}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003049}, primaryClass={cs.AI} }
kakas2000planning
arxiv-669450
cs/0003050
A tableau methodology for deontic conditional logics
<|reference_start|>A tableau methodology for deontic conditional logics: In this paper we present a theorem proving methodology for a restricted but significant fragment of the conditional language made up of (boolean combinations of) conditional statements with unnested antecedents. The method is based on the possible world semantics for conditional logics. The KEM label formalism, designed to account for the semantics of normal modal logics, is easily adapted to the semantics of conditional logics by simply indexing labels with formulas. The inference rules are provided by the propositional system KE+ - a tableau-like analytic proof system devised to be used both as a refutation and a direct method of proof - enlarged with suitable elimination rules for the conditional connective. The theorem proving methodology we are going to present can be viewed as a first step towards developing an appropriate algorithmic framework for several conditional logics for (defeasible) conditional obligation.<|reference_end|>
arxiv
@article{artosi2000a, title={A tableau methodology for deontic conditional logics}, author={Alberto Artosi, Guido Governatori}, journal={Deon'98. 4th International Workshop on Deontic Logic in Computer Science. CIRFID, Bologna, 1998, 75-91}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003050}, primaryClass={cs.LO cs.AI} }
artosi2000a
arxiv-669451
cs/0003051
Local Diagnosis
<|reference_start|>Local Diagnosis: In an earlier work, we have presented operations of belief change which only affect the relevant part of a belief base. In this paper, we propose the application of the same strategy to the problem of model-based diagnosis. We first isolate the subset of the system description which is relevant for a given observation and then solve the diagnosis problem for this subset.<|reference_end|>
arxiv
@article{wassermann2000local, title={Local Diagnosis}, author={Renata Wassermann}, journal={arXiv preprint arXiv:cs/0003051}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003051}, primaryClass={cs.AI} }
wassermann2000local
arxiv-669452
cs/0003052
A Consistency-Based Model for Belief Change: Preliminary Report
<|reference_start|>A Consistency-Based Model for Belief Change: Preliminary Report: We present a general, consistency-based framework for belief change. Informally, in revising K by A, we begin with A and incorporate as much of K as consistently possible. Formally, a knowledge base K and sentence A are expressed, via renaming propositions in K, in separate languages. Using a maximization process, we assume the languages are the same insofar as consistently possible. Lastly, we express the resultant knowledge base in a single language. There may be more than one way in which A can be so extended by K: in choice revision, one such ``extension'' represents the revised state; alternately revision consists of the intersection of all such extensions. The most general formulation of our approach is flexible enough to express other approaches to revision and update, the merging of knowledge bases, and the incorporation of static and dynamic integrity constraints. Our framework differs from work based on ordinal conditional functions, notably with respect to iterated revision. We argue that the approach is well-suited for implementation: the choice revision operator gives better complexity results than general revision; the approach can be expressed in terms of a finite knowledge base; and the scope of a revision can be restricted to just those propositions mentioned in the sentence for revision A.<|reference_end|>
arxiv
@article{delgrande2000a, title={A Consistency-Based Model for Belief Change: Preliminary Report}, author={James Delgrande and Torsten Schaub}, journal={arXiv preprint arXiv:cs/0003052}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003052}, primaryClass={cs.AI} }
delgrande2000a
arxiv-669453
cs/0003053
Security of the Cao-Li Public Key Cryptosystem
<|reference_start|>Security of the Cao-Li Public Key Cryptosystem: We show that the Cao-Li cryptosystem proposed in \cite{CL1} is not secure. Its private key can be reconstructed from its public key using elementary means such as LU-decomposition and Euclidean algorithm.<|reference_end|>
arxiv
@article{lim2000security, title={Security of the Cao-Li Public Key Cryptosystem}, author={Lek-Heng Lim}, journal={Electronics Letters, Volume 34 Number 2, pp. 170-172, January 22 1998}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003053}, primaryClass={cs.CR math.NT} }
lim2000security
arxiv-669454
cs/0003054
A Problem-Specific Fault-Tolerance Mechanism for Asynchronous, Distributed Systems
<|reference_start|>A Problem-Specific Fault-Tolerance Mechanism for Asynchronous, Distributed Systems: The idle computers on a local area, campus area, or even wide area network represent a significant computational resource---one that is, however, also unreliable, heterogeneous, and opportunistic. This type of resource has been used effectively for embarrassingly parallel problems but not for more tightly coupled problems. We describe an algorithm that allows branch-and-bound problems to be solved in such environments. In designing this algorithm, we faced two challenges: (1) scalability, to effectively exploit the variably sized pools of resources available, and (2) fault tolerance, to ensure the reliability of services. We achieve scalability through a fully decentralized algorithm, by using a membership protocol for managing dynamically available resources. However, this fully decentralized design makes achieving reliability even more challenging. We guarantee fault tolerance in the sense that the loss of up to all but one resource will not affect the quality of the solution. For propagating information efficiently, we use epidemic communication for both the membership protocol and the fault-tolerance mechanism. We have developed a simulation framework that allows us to evaluate design alternatives. Results obtained in this framework suggest that our techniques can execute scalably and reliably.<|reference_end|>
arxiv
@article{iamnitchi2000a, title={A Problem-Specific Fault-Tolerance Mechanism for Asynchronous, Distributed Systems}, author={Adriana Iamnitchi and Ian Foster}, journal={arXiv preprint arXiv:cs/0003054}, year={2000}, number={TR-00-01, The University of Chicago}, archivePrefix={arXiv}, eprint={cs/0003054}, primaryClass={cs.DC} }
iamnitchi2000a
arxiv-669455
cs/0003055
TnT - A Statistical Part-of-Speech Tagger
<|reference_start|>TnT - A Statistical Part-of-Speech Tagger: Trigrams'n'Tags (TnT) is an efficient statistical part-of-speech tagger. Contrary to claims found elsewhere in the literature, we argue that a tagger based on Markov models performs at least as well as other current approaches, including the Maximum Entropy framework. A recent comparison has even shown that TnT performs significantly better for the tested corpora. We describe the basic model of TnT, the techniques used for smoothing and for handling unknown words. Furthermore, we present evaluations on two corpora.<|reference_end|>
arxiv
@article{brants2000tnt, title={TnT - A Statistical Part-of-Speech Tagger}, author={Thorsten Brants (Saarland University, Germany)}, journal={Proceedings of ANLP-2000, Seattle, WA}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003055}, primaryClass={cs.CL} }
brants2000tnt
arxiv-669456
cs/0003056
A note on the Declarative reading(s) of Logic Programming
<|reference_start|>A note on the Declarative reading(s) of Logic Programming: This paper analyses the declarative readings of logic programming. Logic programming - and negation as failure - has no unique declarative reading. One common view is that logic programming is a logic for default reasoning, a sub-formalism of default logic or autoepistemic logic. In this view, negation as failure is a modal operator. In an alternative view, a logic program is interpreted as a definition. In this view, negation as failure is classical objective negation. From a commonsense point of view, there is definitely a difference between these views. Surprisingly though, both types of declarative readings lead to grosso modo the same model semantics. This note investigates the causes for this.<|reference_end|>
arxiv
@article{denecker2000a, title={A note on the Declarative reading(s) of Logic Programming}, author={Marc Denecker}, journal={arXiv preprint arXiv:cs/0003056}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003056}, primaryClass={cs.LO cs.AI} }
denecker2000a
arxiv-669457
cs/0003057
XNMR: A tool for knowledge bases exploration
<|reference_start|>XNMR: A tool for knowledge bases exploration: XNMR is a system designed to explore the results of combining the well-founded semantics system XSB with the stable-models evaluator SMODELS. Its main goal is to work as a tool for fast and interactive exploration of knowledge bases.<|reference_end|>
arxiv
@article{castro2000xnmr:, title={XNMR: A tool for knowledge bases exploration}, author={L. Castro, D. Warren}, journal={arXiv preprint arXiv:cs/0003057}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003057}, primaryClass={cs.LO cs.AI} }
castro2000xnmr:
arxiv-669458
cs/0003058
A note on knowledge-based programs and specifications
<|reference_start|>A note on knowledge-based programs and specifications: Knowledge-based programs are programs with explicit tests for knowledge. They have been used successfully in a number of applications. Sanders has pointed out what seems to be a counterintuitive property of knowledge-based programs. Roughly speaking, they do not satisfy a certain monotonicity property, while standard programs (ones without tests for knowledge) do. It is shown that there are two ways of defining the monotonicity property, which agree for standard programs. Knowledge-based programs satisfy the first, but do not satisfy the second. It is further argued by example that the fact that they do not satisfy the second is actually a feature, not a problem. Moreover, once we allow the more general class of knowledge-based specifications, standard programs do not satisfy the monotonicity property either.<|reference_end|>
arxiv
@article{halpern2000a, title={A note on knowledge-based programs and specifications}, author={Joseph Y. Halpern}, journal={arXiv preprint arXiv:cs/0003058}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003058}, primaryClass={cs.DC cs.LO} }
halpern2000a
arxiv-669459
cs/0003059
SATEN: An Object-Oriented Web-Based Revision and Extraction Engine
<|reference_start|>SATEN: An Object-Oriented Web-Based Revision and Extraction Engine: SATEN is an object-oriented web-based extraction and belief revision engine. It runs on any computer via a Java 1.1 enabled browser such as Netscape 4. SATEN performs belief revision based on the AGM approach. The extraction and belief revision reasoning engines operate on a user specified ranking of information. One of the features of SATEN is that it can be used to integrate mutually inconsistent commensurate rankings into a consistent ranking.<|reference_end|>
arxiv
@article{williams2000saten:, title={SATEN: An Object-Oriented Web-Based Revision and Extraction Engine}, author={Mary-Anne Williams and Aidan Sims}, journal={arXiv preprint arXiv:cs/0003059}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003059}, primaryClass={cs.AI} }
williams2000saten:
arxiv-669460
cs/0003060
Message Classification in the Call Center
<|reference_start|>Message Classification in the Call Center: Customer care in technical domains is increasingly based on e-mail communication, allowing for the reproduction of approved solutions. Identifying the customer's problem is often time-consuming, as the problem space changes if new products are launched. This paper describes a new approach to the classification of e-mail requests based on shallow text processing and machine learning techniques. It is implemented within an assistance system for call center agents that is used in a commercial setting.<|reference_end|>
arxiv
@article{busemann2000message, title={Message Classification in the Call Center}, author={Stephan Busemann, Sven Schmeier and Roman G. Arens}, journal={Proceedings of ANLP-2000, Seattle, WA}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003060}, primaryClass={cs.CL} }
busemann2000message
arxiv-669461
cs/0003061
dcs: An Implementation of DATALOG with Constraints
<|reference_start|>dcs: An Implementation of DATALOG with Constraints: Answer-set programming (ASP) has emerged recently as a viable programming paradigm. We describe here an ASP system, DATALOG with constraints or DC, based on non-monotonic logic. Informally, DC theories consist of propositional clauses (constraints) and of Horn rules. The semantics is a simple and natural extension of the semantics of the propositional logic. However, thanks to the presence of Horn rules in the system, modeling of transitive closure becomes straightforward. We describe the syntax, use and implementation of DC and provide experimental results.<|reference_end|>
arxiv
@article{east2000dcs:, title={dcs: An Implementation of DATALOG with Constraints}, author={Deborah East and Miroslaw Truszczynski}, journal={arXiv preprint arXiv:cs/0003061}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003061}, primaryClass={cs.AI} }
east2000dcs:
arxiv-669462
cs/0003062
Reasoning with Higher-Order Abstract Syntax in a Logical Framework
<|reference_start|>Reasoning with Higher-Order Abstract Syntax in a Logical Framework: Logical frameworks based on intuitionistic or linear logics with higher-type quantification have been successfully used to give high-level, modular, and formal specifications of many important judgments in the area of programming languages and inference systems. Given such specifications, it is natural to consider proving properties about the specified systems in the framework: for example, given the specification of evaluation for a functional programming language, prove that the language is deterministic or that evaluation preserves types. One challenge in developing a framework for such reasoning is that higher-order abstract syntax (HOAS), an elegant and declarative treatment of object-level abstraction and substitution, is difficult to treat in proofs involving induction. In this paper, we present a meta-logic that can be used to reason about judgments coded using HOAS; this meta-logic is an extension of a simple intuitionistic logic that admits higher-order quantification over simply typed lambda-terms (key ingredients for HOAS) as well as induction and a notion of definition. We explore the difficulties of formal meta-theoretic analysis of HOAS encodings by considering encodings of intuitionistic and linear logics, and formally derive the admissibility of cut for important subsets of these logics. We then propose an approach to avoid the apparent tradeoff between the benefits of higher-order abstract syntax and the ability to analyze the resulting encodings. We illustrate this approach through examples involving the simple functional and imperative programming languages PCF and PCF:=. We formally derive such properties as unicity of typing, subject reduction, determinacy of evaluation, and the equivalence of transition semantics and natural semantics presentations of evaluation.<|reference_end|>
arxiv
@article{mcdowell2000reasoning, title={Reasoning with Higher-Order Abstract Syntax in a Logical Framework}, author={Raymond C. McDowell and Dale A. Miller}, journal={arXiv preprint arXiv:cs/0003062}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003062}, primaryClass={cs.LO cs.PL} }
mcdowell2000reasoning
arxiv-669463
cs/0003063
Statistics and implementation of APRNGs
<|reference_start|>Statistics and implementation of APRNGs: This paper has been temporarily withdrawn by the author(s),<|reference_end|>
arxiv
@article{guimond2000statistics, title={Statistics and implementation of APRNGs}, author={Louis-Sebastien Guimond, Jan Patera and Jiri Patera}, journal={arXiv preprint arXiv:cs/0003063}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003063}, primaryClass={cs.DM} }
guimond2000statistics
arxiv-669464
cs/0003064
A network file system over HTTP: remote access and modification of files and "files"
<|reference_start|>A network file system over HTTP: remote access and modification of files and "files": The goal of the present HTTPFS project is to enable access to remote files, directories, and other containers through an HTTP pipe. HTTPFS system permits retrieval, creation and modification of these resources as if they were regular files and directories on a local filesystem. The remote host can be any UNIX or Win9x/WinNT box that is capable of running a Perl CGI script and accessible either directly or via a web proxy or a gateway. HTTPFS runs entirely in user space. The current implementation fully supports reading as well as creating, writing, appending, and truncating of files on a remote HTTP host. HTTPFS provides an isolation level for concurrent file access stronger than the one mandated by POSIX file system semantics, closer to that of AFS. Both an API with familiar open(), read(), write(), close(), etc. calls, and an interactive interface, via the popular Midnight Commander file browser, are provided.<|reference_end|>
arxiv
@article{kiselyov2000a, title={A network file system over HTTP: remote access and modification of files and "files"}, author={Oleg Kiselyov}, journal={arXiv preprint arXiv:cs/0003064}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003064}, primaryClass={cs.OS cs.NI} }
kiselyov2000a
arxiv-669465
cs/0003065
Image Compression with Iterated Function Systems, Finite Automata and Zerotrees: Grand Unification
<|reference_start|>Image Compression with Iterated Function Systems, Finite Automata and Zerotrees: Grand Unification: Fractal image compression, Culik's image compression and zerotree prediction coding of wavelet image decomposition coefficients succeed only because typical images being compressed possess a significant degree of self-similarity. Besides the common concept, these methods turn out to be even more tightly related, to the point of algorithmical reducibility of one technique to another. The goal of the present paper is to demonstrate these relations. The paper offers a plain-term interpretation of Culik's image compression, in regular image processing terms, without resorting to finite state machines and similar lofty language. The interpretation is shown to be algorithmically related to an IFS fractal image compression method: an IFS can be exactly transformed into Culik's image code. Using this transformation, we will prove that in a self-similar (part of an) image any zero wavelet coefficient is the root of a zerotree, or its branch. The paper discusses the zerotree coding of (wavelet/projection) coefficients as a common predictor/corrector, applied vertically through different layers of a multiresolutional decomposition, rather than within the same view. This interpretation leads to an insight into the evolution of image compression techniques: from a causal single-layer prediction, to non-causal same-view predictions (wavelet decomposition among others) and to a causal cross-layer prediction (zero-trees, Culik's method).<|reference_end|>
arxiv
@article{kiselyov2000image, title={Image Compression with Iterated Function Systems, Finite Automata and Zerotrees: Grand Unification}, author={Oleg Kiselyov, Paul Fisher}, journal={arXiv preprint arXiv:cs/0003065}, year={2000}, doi={10.1109/DCC.1996.488375}, archivePrefix={arXiv}, eprint={cs/0003065}, primaryClass={cs.CV} }
kiselyov2000image
arxiv-669466
cs/0003066
Specifying and Implementing Security Policies Using LaSCO, the Language for Security Constraints on Objects
<|reference_start|>Specifying and Implementing Security Policies Using LaSCO, the Language for Security Constraints on Objects: In this dissertation, we present LaSCO, the Language for Security Constraints on Objects, a new approach to expressing security policies using policy graphs and present a method for enforcing policies so expressed. Other approaches for stating security policies fall short of what is desirable with respect to either policy clarity, executability, or the precision with which a policy may be expressed. However, LaSCO is designed to have those three desirable properties of a security policy language as well as: relevance for many different systems, statement of policies at an appropriate level of detail, user friendliness for both casual and expert users, and amenability to formal reasoning. In LaSCO, the constraints of a policy are stated as directed graphs annotated with expressions describing the situation under which the policy applies and what the requirement is. LaSCO may be used for such diverse applications as executing programs, file systems, operating systems, distributed systems, and networks. Formal operational semantics have been defined for LaSCO. An architecture for implementing LaSCO on any system is presented along with an implementation of the system-independent portion in Perl. Using this, we have implemented LaSCO for Java programs, preventing Java programs from violating policy. A GUI to facilitate writing policies is provided. We have studied applying LaSCO to a network as viewed by GrIDS, a distributed intrusion detection system for large networks, and propose a design. We conclude that LaSCO has characteristics that enable its use on different types of systems throughout the process of precisely expressing a policy, understanding the implications of a policy, and implementing it on a system.<|reference_end|>
arxiv
@article{hoagland2000specifying, title={Specifying and Implementing Security Policies Using LaSCO, the Language for Security Constraints on Objects}, author={James A. Hoagland}, journal={arXiv preprint arXiv:cs/0003066}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003066}, primaryClass={cs.CR} }
hoagland2000specifying
arxiv-669467
cs/0003067
Detecting Unsolvable Queries for Definite Logic Programs
<|reference_start|>Detecting Unsolvable Queries for Definite Logic Programs: In solving a query, the SLD proof procedure for definite programs sometimes searches an infinite space for a non-existing solution. For example, querying a planner for an unreachable goal state. Such programs motivate the development of methods to prove the absence of a solution. Considering the definite program and the query ``<- Q'' as clauses of a first order theory, one can apply model generators which search for a finite interpretation in which the program clauses as well as the clause ``false <- Q'' are true. This paper develops a new approach which exploits the fact that all clauses are definite. It is based on a goal directed abductive search in the space of finite pre-interpretations for a pre-interpretation such that ``Q'' is false in the least model of the program based on it. Several methods for efficiently searching the space of pre-interpretations are presented. Experimental results confirm that our approach finds solutions with less search than with the use of a first order model generator.<|reference_end|>
arxiv
@article{bruynooghe2000detecting, title={Detecting Unsolvable Queries for Definite Logic Programs}, author={Maurice Bruynooghe (1), Henk Vandecasteele (1), D. Andre de Waal (2), Marc Denecker (1) ((1) Katholieke Universiteit Leuven, Belgium, (2) Potchefstroom University for Christian Higher Education, South Africa)}, journal={Journal of Functional and Logic Programming, Vol. 1999, 1-35, 1999}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003067}, primaryClass={cs.LO cs.AI} }
bruynooghe2000detecting
arxiv-669468
cs/0003068
A Polyvariant Binding-Time Analysis for Off-line Partial Deduction
<|reference_start|>A Polyvariant Binding-Time Analysis for Off-line Partial Deduction: We study the notion of binding-time analysis for logic programs. We formalise the unfolding aspect of an on-line partial deduction system as a Prolog program. Using abstract interpretation, we collect information about the run-time behaviour of the program. We use this information to make the control decisions about the unfolding at analysis time and to turn the on-line system into an off-line system. We report on some initial experiments.<|reference_end|>
arxiv
@article{bruynooghe2000a, title={A Polyvariant Binding-Time Analysis for Off-line Partial Deduction}, author={Maurice Bruynooghe, Michael Leuschel, Konstantinos Sagonas (Katholieke Universiteit Leuven, Belgium)}, journal={arXiv preprint arXiv:cs/0003068}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003068}, primaryClass={cs.PL cs.LO} }
bruynooghe2000a
arxiv-669469
cs/0003069
Proving Failure of Queries for Definite Logic Programs Using XSB-Prolog
<|reference_start|>Proving Failure of Queries for Definite Logic Programs Using XSB-Prolog: Proving failure of queries for definite logic programs can be done by constructing a finite model of the program in which the query is false. A general purpose model generator for first order logic can be used for this. A recent paper presented at PLILP98 shows how the peculiarities of definite programs can be exploited to obtain a better solution. There a procedure is described which combines abduction with tabulation and uses a meta-interpreter for heuristic control of the search. The current paper shows how similar results can be obtained by direct execution under the standard tabulation of the XSB-Prolog system. The loss of control is compensated for by better intelligent backtracking and more accurate failure analysis.<|reference_end|>
arxiv
@article{pelov2000proving, title={Proving Failure of Queries for Definite Logic Programs Using XSB-Prolog}, author={Nikolay Pelov, Maurice Bruynooghe}, journal={arXiv preprint arXiv:cs/0003069}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003069}, primaryClass={cs.LO} }
pelov2000proving
arxiv-669470
cs/0003070
The (Lazy) Functional Side of Logic Programming
<|reference_start|>The (Lazy) Functional Side of Logic Programming: The possibility of translating logic programs into functional ones has long been a subject of investigation. Common to the many approaches is that the original logic program, in order to be translated, needs to be well-moded and this has led to the common understanding that these programs can be considered to be the ``functional part'' of logic programs. As a consequence of this it has become widely accepted that ``complex'' logical variables, the possibility of a dynamic selection rule, and general properties of non-well-moded programs are exclusive features of logic programs. This is not quite true, as some of these features are naturally found in lazy functional languages. We readdress the old question of what features are exclusive to the logic programming paradigm by defining a simple translation applicable to a wider range of logic programs, and demonstrate that the current circumscription is unreasonably restrictive.<|reference_end|>
arxiv
@article{etalle2000the, title={The (Lazy) Functional Side of Logic Programming}, author={S. Etalle and J. Mountjoy}, journal={arXiv preprint arXiv:cs/0003070}, year={2000}, number={CS-00-02}, archivePrefix={arXiv}, eprint={cs/0003070}, primaryClass={cs.PL cs.LO} }
etalle2000the
arxiv-669471
cs/0003071
Axiomatic Synthesis of Computer Programs and Computability Theorems
<|reference_start|>Axiomatic Synthesis of Computer Programs and Computability Theorems: We introduce a set of eight universal Rules of Inference by which computer programs with known properties (axioms) are transformed into new programs with known properties (theorems). Axioms are presented to formalize a segment of Number Theory, DataBase retrieval and Computability Theory. The resulting Program Calculus is used to generate programs to (1) Determine if one number is a factor of another. (2) List all employees who earn more than their manager. (3) List the set of programs that halt no on themselves, thus proving that it is recursively enumerable. The well-known fact that the set of programs that do not halt yes on themselves is not recursively enumerable is formalized as a program requirement that has no solution, an Incompleteness Axiom. Thus, any axioms (programs) which could be used to generate this program are themselves unattainable. Such proofs are presented to formally generate several additional theorems, including (4) The halting problem is unsolvable. Open problems and future research is discussed, including the use of temporary sort files, programs that calculate statistics (such as counts and sums), the synthesis of programs to solve other well-known problems from Number Theory, Logic, DataBase retrieval and Computability Theory, application to Programming Language Semantics, and the formalization of incompleteness results from Logic and the semantic paradoxes.<|reference_end|>
arxiv
@article{volkstorf2000axiomatic, title={Axiomatic Synthesis of Computer Programs and Computability Theorems}, author={Charlie Volkstorf}, journal={arXiv preprint arXiv:cs/0003071}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003071}, primaryClass={cs.LO} }
volkstorf2000axiomatic
arxiv-669472
cs/0003072
MOO: A Methodology for Online Optimization through Mining the Offline Optimum
<|reference_start|>MOO: A Methodology for Online Optimization through Mining the Offline Optimum: Ports, warehouses and courier services have to decide online how an arriving task is to be served in order that cost is minimized (or profit maximized). These operators have a wealth of historical data on task assignments; can these data be mined for knowledge or rules that can help the decision-making? MOO is a novel application of data mining to online optimization. The idea is to mine (logged) expert decisions or the offline optimum for rules that can be used for online decisions. It requires little knowledge about the task distribution and cost structure, and is applicable to a wide range of problems. This paper presents a feasibility study of the methodology for the well-known k-server problem. Experiments with synthetic data show that optimization can be recast as classification of the optimum decisions; the resulting heuristic can achieve the optimum for strong request patterns, consistently outperforms other heuristics for weak patterns, and is robust despite changes in cost model.<|reference_end|>
arxiv
@article{lee2000moo:, title={MOO: A Methodology for Online Optimization through Mining the Offline Optimum}, author={Jason W.H. Lee, Y.C. Tay, Anthony K.H. Tung (National University of Singapore)}, journal={arXiv preprint arXiv:cs/0003072}, year={2000}, number={Research Report No. 743}, archivePrefix={arXiv}, eprint={cs/0003072}, primaryClass={cs.DS cs.LG} }
lee2000moo:
arxiv-669473
cs/0003073
Proceedings of the 8th International Workshop on Non-Monotonic Reasoning, NMR'2000
<|reference_start|>Proceedings of the 8th International Workshop on Non-Monotonic Reasoning, NMR'2000: The papers gathered in this collection were presented at the 8th International Workshop on Nonmonotonic Reasoning, NMR2000. The series was started by John McCarthy in 1978. The first international NMR workshop was held at Mohonk Mountain House, New Paltz, New York in June, 1984, and was organized by Ray Reiter and Bonnie Webber. In the last 10 years the area of nonmonotonic reasoning has seen a number of important developments. Significant theoretical advances were made in the understanding of general abstract principles underlying nonmonotonicity. Key results on the expressibility and computational complexity of nonmonotonic logics were established. The role of nonmonotonic reasoning in belief revision, abduction, reasoning about action, planning and uncertainty was further clarified. Several successful NMR systems were built and used in applications such as planning, scheduling, logic programming and constraint satisfaction. The papers in the proceedings reflect these recent advances in the field. They are grouped into sections corresponding to special sessions as they were held at the workshop: 1. General NMR track 2. Abductive reasoning 3. Belief revision: theory and practice 4. Representing action and planning 5. Systems descriptions and demonstrations 6. Uncertainty frameworks in NMR<|reference_end|>
arxiv
@article{baral2000proceedings, title={Proceedings of the 8th International Workshop on Non-Monotonic Reasoning, NMR'2000}, author={Chitta Baral and Miroslaw Truszczynski}, journal={arXiv preprint arXiv:cs/0003073}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003073}, primaryClass={cs.AI cs.LO} }
baral2000proceedings
arxiv-669474
cs/0003074
A Finite State and Data-Oriented Method for Grapheme to Phoneme Conversion
<|reference_start|>A Finite State and Data-Oriented Method for Grapheme to Phoneme Conversion: A finite-state method, based on leftmost longest-match replacement, is presented for segmenting words into graphemes, and for converting graphemes into phonemes. A small set of hand-crafted conversion rules for Dutch achieves a phoneme accuracy of over 93%. The accuracy of the system is further improved by using transformation-based learning. The phoneme accuracy of the best system (using a large set of rule templates and a `lazy' variant of Brill's algorithm), trained on only 40K words, reaches 99%.<|reference_end|>
arxiv
@article{bouma2000a, title={A Finite State and Data-Oriented Method for Grapheme to Phoneme Conversion}, author={Gosse Bouma}, journal={Proceedings of NAACL-2000, Seattle, WA}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003074}, primaryClass={cs.CL} }
bouma2000a
arxiv-669475
cs/0003075
On the theory of system administration
<|reference_start|>On the theory of system administration: This paper describes necessary elements for constructing theoretical models of network and system administration. Armed with a theoretical model it becomes possible to determine best practices and optimal strategies in a way which objectively relates policies and assumptions to results obtained. It is concluded that a mixture of automation and human, or other intelligent incursion is required to fully implement system policy with current technology. Some aspects of the author's immunity model for automated system administration are explained, as an example. A theoretical framework makes the prediction that the optimal balance between resource availability and garbage collection strategies is encompassed by the immunity model.<|reference_end|>
arxiv
@article{burgess2000on, title={On the theory of system administration}, author={Mark Burgess}, journal={arXiv preprint arXiv:cs/0003075}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003075}, primaryClass={cs.OH} }
burgess2000on
arxiv-669476
cs/0003076
Constraint Programming viewed as Rule-based Programming
<|reference_start|>Constraint Programming viewed as Rule-based Programming: We study here a natural situation when constraint programming can be entirely reduced to rule-based programming. To this end we explain first how one can compute on constraint satisfaction problems using rules represented by simple first-order formulas. Then we consider constraint satisfaction problems that are based on predefined, explicitly given constraints. To solve them we first derive rules from these explicitly given constraints and limit the computation process to a repeated application of these rules, combined with labeling. We consider here two types of rules. The first type, that we call equality rules, leads to a new notion of local consistency, called {\em rule consistency} that turns out to be weaker than arc consistency for constraints of arbitrary arity (called hyper-arc consistency in \cite{MS98b}). For Boolean constraints rule consistency coincides with the closure under the well-known propagation rules for Boolean constraints. The second type of rules, that we call membership rules, yields a rule-based characterization of arc consistency. To show feasibility of this rule-based approach to constraint programming we show how both types of rules can be automatically generated, as {\tt CHR} rules of \cite{fruhwirth-constraint-95}. This yields an implementation of this approach to programming by means of constraint logic programming. We illustrate the usefulness of this approach to constraint programming by discussing various examples, including Boolean constraints, two typical examples of many valued logics, constraints dealing with Waltz's language for describing polyhedral scenes, and Allen's qualitative approach to temporal logic.<|reference_end|>
arxiv
@article{apt2000constraint, title={Constraint Programming viewed as Rule-based Programming}, author={Krzysztof R. Apt and Eric Monfroy}, journal={arXiv preprint arXiv:cs/0003076}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003076}, primaryClass={cs.AI cs.PL} }
apt2000constraint
arxiv-669477
cs/0003077
DATALOG with constraints - an answer-set programming system
<|reference_start|>DATALOG with constraints - an answer-set programming system: Answer-set programming (ASP) has emerged recently as a viable programming paradigm well attuned to search problems in AI, constraint satisfaction and combinatorics. Propositional logic is, arguably, the simplest ASP system with an intuitive semantics supporting direct modeling of problem constraints. However, for some applications, especially those requiring that transitive closure be computed, it requires additional variables and results in large theories. Consequently, it may not be a practical computational tool for such problems. On the other hand, ASP systems based on nonmonotonic logics, such as stable logic programming, can handle transitive closure computation efficiently and, in general, yield very concise theories as problem representations. Their semantics is, however, more complex. Searching for the middle ground, in this paper we introduce a new nonmonotonic logic, DATALOG with constraints or DC. Informally, DC theories consist of propositional clauses (constraints) and of Horn rules. The semantics is a simple and natural extension of the semantics of the propositional logic. However, thanks to the presence of Horn rules in the system, modeling of transitive closure becomes straightforward. We describe the syntax and semantics of DC, and study its properties. We discuss an implementation of DC and present results of experimental study of the effectiveness of DC, comparing it with CSAT, a satisfiability checker and SMODELS implementation of stable logic programming. Our results show that DC is competitive with the other two approaches, in case of many search problems, often yielding much more efficient solutions.<|reference_end|>
arxiv
@article{east2000datalog, title={DATALOG with constraints - an answer-set programming system}, author={Deborah East and Miroslaw Truszczynski}, journal={arXiv preprint arXiv:cs/0003077}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003077}, primaryClass={cs.AI} }
east2000datalog
arxiv-669478
cs/0003078
About the finding of independent vertices of a graph
<|reference_start|>About the finding of independent vertices of a graph: We examine the Maximum Independent Set Problem in an undirected graph. The main result is that this problem can be considered as solving the same problem in a subclass of the weighted normal twin-orthogonal graphs. A problem is also formulated which is dual to the problem above. It is shown that, for trivial twin-orthogonal graphs, any maximal independent set is also a maximum one.<|reference_end|>
arxiv
@article{plotnikov2000about, title={About the finding of independent vertices of a graph}, author={Anatoly D. Plotnikov}, journal={About the finding of independent vertices of a graph, Journal "Kibernetika", No. 1, 1989, p. 119 - 121}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003078}, primaryClass={cs.DS} }
plotnikov2000about
arxiv-669479
cs/0003079
Differential Invariants under Gamma Correction
<|reference_start|>Differential Invariants under Gamma Correction: This paper presents invariants under gamma correction and similarity transformations. The invariants are local features based on differentials which are implemented using derivatives of the Gaussian. The use of the proposed invariant representation is shown to yield improved correlation results in a template matching scenario.<|reference_end|>
arxiv
@article{siebert2000differential, title={Differential Invariants under Gamma Correction}, author={Andreas Siebert}, journal={Vision Interface 2000, Montreal, 2000}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003079}, primaryClass={cs.CV} }
siebert2000differential
arxiv-669480
cs/0003080
Some Remarks on Boolean Constraint Propagation
<|reference_start|>Some Remarks on Boolean Constraint Propagation: We study here the well-known propagation rules for Boolean constraints. First we propose a simple notion of completeness for sets of such rules and establish a completeness result. Then we show an equivalence in an appropriate sense between Boolean constraint propagation and unit propagation, a form of resolution for propositional logic. Subsequently we characterize one set of such rules by means of the notion of hyper-arc consistency introduced in (Mohr and Masini 1988). Also, we clarify the status of a similar, though different, set of rules introduced in (Simonis 1989a) and more fully in (Codognet and Diaz 1996).<|reference_end|>
arxiv
@article{apt2000some, title={Some Remarks on Boolean Constraint Propagation}, author={Krzysztof R. Apt}, journal={arXiv preprint arXiv:cs/0003080}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003080}, primaryClass={cs.AI} }
apt2000some
arxiv-669481
cs/0003081
Variable Word Rate N-grams
<|reference_start|>Variable Word Rate N-grams: The rate of occurrence of words is not uniform but varies from document to document. Despite this observation, parameters for conventional n-gram language models are usually derived using the assumption of a constant word rate. In this paper we investigate the use of a variable word rate assumption, modelled by a Poisson distribution or a continuous mixture of Poissons. We present an approach to estimating the relative frequencies of words or n-grams taking prior information of their occurrences into account. Discounting and smoothing schemes are also considered. Using the Broadcast News task, the approach demonstrates a reduction in perplexity of up to 10%.<|reference_end|>
arxiv
@article{gotoh2000variable, title={Variable Word Rate N-grams}, author={Yoshihiko Gotoh and Steve Renals}, journal={arXiv preprint arXiv:cs/0003081}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003081}, primaryClass={cs.CL} }
gotoh2000variable
arxiv-669482
cs/0003082
Representation results for defeasible logic
<|reference_start|>Representation results for defeasible logic: The importance of transformations and normal forms in logic programming, and generally in computer science, is well documented. This paper investigates transformations and normal forms in the context of Defeasible Logic, a simple but efficient formalism for nonmonotonic reasoning based on rules and priorities. The transformations described in this paper have two main benefits: on one hand they can be used as a theoretical tool that leads to a deeper understanding of the formalism, and on the other hand they have been used in the development of an efficient implementation of defeasible logic.<|reference_end|>
arxiv
@article{antoniou2000representation, title={Representation results for defeasible logic}, author={G. Antoniou, D. Billington, G. Governatori and M.J. Maher}, journal={ACM Transactions on Computational Logic, 2 (2), 255--287, 2001}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003082}, primaryClass={cs.LO cs.AI} }
antoniou2000representation
arxiv-669483
cs/0003083
Advances in domain independent linear text segmentation
<|reference_start|>Advances in domain independent linear text segmentation: This paper describes a method for linear text segmentation which is twice as accurate and over seven times as fast as the state-of-the-art (Reynar, 1998). Inter-sentence similarity is replaced by rank in the local context. Boundary locations are discovered by divisive clustering.<|reference_end|>
arxiv
@article{choi2000advances, title={Advances in domain independent linear text segmentation}, author={Freddy Y. Y. Choi (University of Manchester)}, journal={arXiv preprint arXiv:cs/0003083}, year={2000}, archivePrefix={arXiv}, eprint={cs/0003083}, primaryClass={cs.CL} }
choi2000advances
arxiv-669484
cs/0003084
Information Extraction from Broadcast News
<|reference_start|>Information Extraction from Broadcast News: This paper discusses the development of trainable statistical models for extracting content from television and radio news broadcasts. In particular we concentrate on statistical finite state models for identifying proper names and other named entities in broadcast speech. Two models are presented: the first represents name class information as a word attribute; the second represents both word-word and class-class transitions explicitly. A common n-gram based formulation is used for both models. The task of named entity identification is characterized by relatively sparse training data and issues related to smoothing are discussed. Experiments are reported using the DARPA/NIST Hub-4E evaluation for North American Broadcast News.<|reference_end|>
arxiv
@article{gotoh2000information, title={Information Extraction from Broadcast News}, author={Yoshihiko Gotoh and Steve Renals}, journal={arXiv preprint arXiv:cs/0003084}, year={2000}, doi={10.1098/rsta.2000.0587}, archivePrefix={arXiv}, eprint={cs/0003084}, primaryClass={cs.CL} }
gotoh2000information
arxiv-669485
cs/0004001
A Theory of Universal Artificial Intelligence based on Algorithmic Complexity
<|reference_start|>A Theory of Universal Artificial Intelligence based on Algorithmic Complexity: Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence. We give strong arguments that the resulting AIXI model is the most intelligent unbiased agent possible. We outline for a number of problem classes, including sequence prediction, strategic games, function minimization, reinforcement and supervised learning, how the AIXI model can formally solve them. The major drawback of the AIXI model is that it is uncomputable. To overcome this problem, we construct a modified algorithm AIXI-tl, which is still effectively more intelligent than any other time t and space l bounded agent. The computation time of AIXI-tl is of the order t·2^l. Other discussed topics are formal definitions of intelligence order relations, the horizon problem and relations of the AIXI theory to other AI approaches.<|reference_end|>
arxiv
@article{hutter2000a, title={A Theory of Universal Artificial Intelligence based on Algorithmic Complexity}, author={Marcus Hutter}, journal={arXiv preprint arXiv:cs/0004001}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004001}, primaryClass={cs.AI cs.IT cs.LG math.IT} }
hutter2000a
arxiv-669486
cs/0004002
Programming in Alma-0, or Imperative and Declarative Programming Reconciled
<|reference_start|>Programming in Alma-0, or Imperative and Declarative Programming Reconciled: In (Apt et al, TOPLAS 1998) we introduced the imperative programming language Alma-0 that supports declarative programming. In this paper we illustrate the hybrid programming style of Alma-0 by means of various examples that complement those presented in (Apt et al, TOPLAS 1998). The presented Alma-0 programs illustrate the versatility of the language and show that ``don't know'' nondeterminism can be naturally combined with assignment.<|reference_end|>
arxiv
@article{apt2000programming, title={Programming in Alma-0, or Imperative and Declarative Programming Reconciled}, author={Krzysztof R. Apt and Andrea Schaerf}, journal={Frontiers of Combining Systems 2, Research Studies Press Ltd, D. M. Gabbay and M. de Rijke (editors), pages 1-16, 1999}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004002}, primaryClass={cs.LO cs.AI cs.PL} }
apt2000programming
arxiv-669487
cs/0004003
Searching for Spaceships
<|reference_start|>Searching for Spaceships: We describe software that searches for spaceships in Conway's Game of Life and related two-dimensional cellular automata. Our program searches through a state space related to the de Bruijn graph of the automaton, using a method that combines features of breadth first and iterative deepening search, and includes fast bit-parallel graph reachability and path enumeration algorithms for finding the successors of each state. Successful results include a new 2c/7 spaceship in Life, found by searching a space with 2^126 states.<|reference_end|>
arxiv
@article{eppstein2000searching, title={Searching for Spaceships}, author={David Eppstein}, journal={More Games of No Chance, MSRI Publications 42, 2002, pp. 433-453}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004003}, primaryClass={cs.AI nlin.CG} }
eppstein2000searching
arxiv-669488
cs/0004004
Mathematical Software: Past, Present, and Future
<|reference_start|>Mathematical Software: Past, Present, and Future: This paper provides some reflections on the field of mathematical software on the occasion of John Rice's 65th birthday. I describe some of the common themes of research in this field and recall some significant events in its evolution. Finally, I raise a number of issues that are of concern to future developments.<|reference_end|>
arxiv
@article{boisvert2000mathematical, title={Mathematical Software: Past, Present, and Future}, author={Ronald F. Boisvert}, journal={arXiv preprint arXiv:cs/0004004}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004004}, primaryClass={cs.MS} }
boisvert2000mathematical
arxiv-669489
cs/0004005
Exact Phase Transitions in Random Constraint Satisfaction Problems
<|reference_start|>Exact Phase Transitions in Random Constraint Satisfaction Problems: In this paper we propose a new type of random CSP model, called Model RB, which is a revision to the standard Model B. It is proved that phase transitions from a region where almost all problems are satisfiable to a region where almost all problems are unsatisfiable do exist for Model RB as the number of variables approaches infinity. Moreover, the critical values at which the phase transitions occur are also known exactly. By relating the hardness of Model RB to Model B, it is shown that there exist a lot of hard instances in Model RB.<|reference_end|>
arxiv
@article{xu2000exact, title={Exact Phase Transitions in Random Constraint Satisfaction Problems}, author={Ke Xu, Wei Li}, journal={Journal of Artificial Intelligence Research, Vol 12, (2000), 93-103.}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004005}, primaryClass={cs.AI cs.CC cs.DM} }
xu2000exact
arxiv-669490
cs/0004006
On Redundancy Elimination Tolerant Scheduling Rules
<|reference_start|>On Redundancy Elimination Tolerant Scheduling Rules: In (Ferrucci, Pacini and Sessa, 1995) an extended form of resolution, called Reduced SLD resolution (RSLD), is introduced. In essence, an RSLD derivation is an SLD derivation such that redundancy elimination from resolvents is performed after each rewriting step. It is intuitive that redundancy elimination may have positive effects on the derivation process. However, undesirable effects are also possible. In particular, as shown in this paper, program termination as well as completeness of loop checking mechanisms via a given selection rule may be lost. The study of such effects has led us to an analysis of selection rule basic concepts, so that we have found it convenient to move attention from rules of atom selection to rules of atom scheduling. A priority mechanism for atom scheduling is built, where a priority is assigned to each atom in a resolvent, and primary importance is given to the event of arrival of new atoms from the body of the applied clause at rewriting time. This new computational model proves able to address the study of redundancy elimination effects, giving at the same time interesting insights into general properties of selection rules. As a matter of fact, a class of scheduling rules, namely the specialisation independent ones, is defined in the paper by using non-trivial semantic arguments. As a quite surprising result, specialisation independent scheduling rules turn out to coincide with a class of rules which have an immediate structural characterisation (named stack-queue rules). Then we prove that such scheduling rules are tolerant to redundancy elimination, in the sense that neither program termination nor completeness of the equality loop check is lost passing from SLD to RSLD.<|reference_end|>
arxiv
@article{ferrucci2000on, title={On Redundancy Elimination Tolerant Scheduling Rules}, author={F. Ferrucci, G. Pacini, M.I. Sessa}, journal={arXiv preprint arXiv:cs/0004006}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004006}, primaryClass={cs.PL} }
ferrucci2000on
arxiv-669491
cs/0004007
Deciding first-order properties of locally tree-decomposable structures
<|reference_start|>Deciding first-order properties of locally tree-decomposable structures: We introduce the concept of a class of graphs, or more generally, relational structures, being locally tree-decomposable. There are numerous examples of locally tree-decomposable classes, among them the class of planar graphs and all classes of bounded valence or of bounded tree-width. We also consider a slightly more general concept of a class of structures having bounded local tree-width. We show that for each property P of structures that is definable in first-order logic and for each locally tree-decomposable class C of graphs, there is a linear time algorithm deciding whether a given structure A in C has property P. For classes C of bounded local tree-width, we show that for every k\ge 1 there is an algorithm that solves the same problem in time O(n^{1+(1/k)}) (where n is the cardinality of the input structure).<|reference_end|>
arxiv
@article{frick2000deciding, title={Deciding first-order properties of locally tree-decomposable structures}, author={Markus Frick and Martin Grohe}, journal={arXiv preprint arXiv:cs/0004007}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004007}, primaryClass={cs.DS cs.CC cs.DB} }
frick2000deciding
arxiv-669492
cs/0004008
How to Evaluate your Question Answering System Every Day and Still Get Real Work Done
<|reference_start|>How to Evaluate your Question Answering System Every Day and Still Get Real Work Done: In this paper, we report on Qaviar, an experimental automated evaluation system for question answering applications. The goal of our research was to find an automatically calculated measure that correlates well with human judges' assessment of answer correctness in the context of question answering tasks. Qaviar judges the response by computing recall against the stemmed content words in the human-generated answer key. It counts the answer correct if it exceeds a given recall threshold. We determined that the answer correctness predicted by Qaviar agreed with the human assessment 93% to 95% of the time. Forty-one question-answering systems were ranked by both Qaviar and human assessors, and these rankings correlated with a Kendall's Tau measure of 0.920, compared to a correlation of 0.956 between human assessors on the same data.<|reference_end|>
arxiv
@article{breck2000how, title={How to Evaluate your Question Answering System Every Day and Still Get Real Work Done}, author={Eric Breck (1), John D. Burger (1), Lisa Ferro (1), Lynette Hirschman (1), David House (1), Marc Light (1), Inderjeet Mani (1) ((1) The MITRE Corporation)}, journal={arXiv preprint arXiv:cs/0004008}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004008}, primaryClass={cs.CL cs.IR} }
breck2000how
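The judging procedure described in the abstract above (recall over the stemmed content words of the human-generated answer key, with a pass threshold) is concrete enough to illustrate. The sketch below is only a reconstruction under stated assumptions: the toy suffix-stripping stemmer, the stopword list, and the 0.3 threshold are placeholders, not the components or values used in Qaviar itself.

# Minimal sketch of a Qaviar-style answer judge: recall over stemmed
# content words of the answer key, correct if recall meets a threshold.
# The stemmer, stopword list, and threshold here are illustrative only.

STOPWORDS = {"a", "an", "the", "of", "in", "on", "to", "is", "was", "and"}

def toy_stem(word):
    # Stand-in for a real stemmer (e.g. Porter): strip a few common suffixes.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def content_terms(text):
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    return {toy_stem(w) for w in words if w and w not in STOPWORDS}

def judge(response, answer_key, threshold=0.3):
    key_terms = content_terms(answer_key)
    if not key_terms:
        return False
    recall = len(key_terms & content_terms(response)) / len(key_terms)
    return recall >= threshold   # fraction of key terms recovered

# Both key terms ("1889" and the stem of "Paris") occur in the response,
# so recall is 1.0 and the answer is judged correct.
print(judge("The Eiffel Tower was completed in 1889 in Paris.", "1889, Paris"))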
arxiv-669493
cs/0004009
Separating the complexity classes NL and NP
<|reference_start|>Separating the complexity classes NL and NP: Withdrawn because the role of order was overlooked: first-order reductions without order are much too weak to separate these classes.<|reference_end|>
arxiv
@article{benson2000separating, title={Separating the complexity classes NL and NP}, author={David B. Benson}, journal={arXiv preprint arXiv:cs/0004009}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004009}, primaryClass={cs.CC} }
benson2000separating
arxiv-669494
cs/0004010
Design and Evaluation of Mechanisms for a Multicomputer Object Store
<|reference_start|>Design and Evaluation of Mechanisms for a Multicomputer Object Store: Multicomputers have traditionally been viewed as powerful compute engines. It is from this perspective that they have been applied to various problems in order to achieve significant performance gains. There are many applications for which this compute intensive approach is only a partial solution. CAD, virtual reality, simulation, document management and analysis all require timely access to large amounts of data. This thesis investigates the use of the object store paradigm to harness the large distributed memories found on multicomputers. The design, implementation, and evaluation of a distributed object server on the Fujitsu AP1000 is described. The performance of the distributed object server under example applications, mainly physical simulation problems, is used to evaluate solutions to the problems of client space recovery, object migration, and coherence maintenance. The distributed object server follows the client-server model, allows object replication, and uses binary semaphores as a concurrency control measure. Instrumentation of the server under these applications supports several conclusions: client space recovery should be dynamically controlled by the application, predictively prefetching object replicas yields benefits in restricted circumstances, object migration by storage unit (segment) is not generally suitable where there are many objects per storage unit, and binary semaphores are an expensive concurrency control measure in this environment.<|reference_end|>
arxiv
@article{weaver2000design, title={Design and Evaluation of Mechanisms for a Multicomputer Object Store}, author={Lex Weaver}, journal={arXiv preprint arXiv:cs/0004010}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004010}, primaryClass={cs.DC cs.DB} }
weaver2000design
arxiv-669495
cs/0004011
Task Frames
<|reference_start|>Task Frames: Forty years ago Dijkstra introduced the current conventional execution of routines. It places activation frames onto a stack. Each frame is the internal state of an executing routine. The resulting application execution is not easily helped by an external system. This presentation proposes an alternative execution of routines. It places task frames onto the stack. A task frame is the call of a routine to be executed. The feasibility of the alternative execution is demonstrated by a crude implementation. As described elsewhere, an application which executes in terms of tasks can be provided by an external system with a transparent reliable, distributed, heterogeneous, adaptive, dynamic, real-time, parallel, secure or other execution. By extending the crude implementation, this presentation outlines a simple transparent parallel execution.<|reference_end|>
arxiv
@article{steinmacher-burow2000task, title={Task Frames}, author={Burkhard D. Steinmacher-Burow}, journal={arXiv preprint arXiv:cs/0004011}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004011}, primaryClass={cs.PL} }
steinmacher-burow2000task
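The entry above contrasts activation frames (the internal state of an executing routine) with task frames (calls still to be executed). The sketch below is only a rough illustration of that general idea, not the paper's mechanism, which the abstract does not detail: a recursive computation is run by pushing explicit task records onto a stack managed by the program instead of nesting native activation frames.

# Illustration only: execute "calls still to be made" (task frames) from an
# explicit, program-managed stack instead of the native call stack.
# Example computation: sum of the values in a binary tree.

def tree_sum(tree):
    total = 0
    tasks = [tree]              # each entry is the argument of a pending call
    while tasks:
        node = tasks.pop()      # take the next task frame
        if node is None:
            continue
        value, left, right = node
        total += value
        tasks.append(left)      # schedule the sub-calls as new task frames
        tasks.append(right)
    return total

# Nodes are (value, left, right) tuples; absent children are None.
t = (1, (2, None, None), (3, (4, None, None), None))
print(tree_sum(t))              # prints 10

The only point of the illustration is the control structure: the stack holds descriptions of work still to be done, so an external system could in principle reorder, distribute, or checkpoint it, which is the kind of intervention the entry above motivates.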
arxiv-669496
cs/0004012
Assisted Video Sequences Indexing : Motion Analysis Based on Interest Points
<|reference_start|>Assisted Video Sequences Indexing : Motion Analysis Based on Interest Points: This work deals with content-based video indexing. Our viewpoint is semi-automatic analysis of compressed video. We consider the possible applications of motion analysis and moving object detection: assisting moving object indexing, summarising videos, and allowing image and motion queries. We propose an approach based on interest points. As initial results, we test and compare the stability of different types of interest point detectors in compressed sequences.<|reference_end|>
arxiv
@article{etievent2000assisted, title={Assisted Video Sequences Indexing : Motion Analysis Based on Interest Points}, author={Emmanuel Etievent, Frank Lebourgeois, Jean-Michel Jolion}, journal={Iciap 99, Venezia, 27-29 sept., 1059-1062}, year={2000}, number={RR9903}, archivePrefix={arXiv}, eprint={cs/0004012}, primaryClass={cs.CV} }
etievent2000assisted
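The abstract above compares the stability of several interest point detectors in compressed video without naming them. Purely as a hedged illustration of what detecting interest points on one decoded frame looks like, the sketch below uses a common detector (Shi-Tomasi corners via OpenCV); the choice of detector, the file name, and the parameter values are assumptions and not necessarily what the authors evaluated.

# Illustration only: interest points on a single decoded frame with OpenCV's
# Shi-Tomasi corner detector. File name and parameters are placeholders.
import cv2

frame = cv2.imread("frame.png")                  # one decoded video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners = cv2.goodFeaturesToTrack(gray, maxCorners=200,
                                  qualityLevel=0.01, minDistance=10)
print(0 if corners is None else len(corners), "interest points found")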
arxiv-669497
cs/0004013
Sorting Integers on the AP1000
<|reference_start|>Sorting Integers on the AP1000: Sorting is one of the classic problems of computer science. Whilst well understood on sequential machines, the diversity of architectures amongst parallel systems means that algorithms do not perform uniformly on all platforms. This document describes the implementation of a radix based algorithm for sorting positive integers on a Fujitsu AP1000 Supercomputer, which was constructed as an entry in the Joint Symposium on Parallel Processing (JSPP) 1994 Parallel Software Contest (PSC94). Brief consideration is also given to a full radix sort conducted in parallel across the machine.<|reference_end|>
arxiv
@article{weaver2000sorting, title={Sorting Integers on the AP1000}, author={Lex Weaver and Andrew Lynes}, journal={arXiv preprint arXiv:cs/0004013}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004013}, primaryClass={cs.DC} }
weaver2000sorting
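The entry above describes a radix-based sort of positive integers written for the AP1000. As a reference point only, the sketch below is a plain sequential least-significant-digit radix sort with 8-bit digits; the digit width is an arbitrary assumption, and the contest implementation's parallel distribution of keys across processors is not reproduced.

# Sequential LSD radix sort for non-negative integers, 8-bit digits.
# A reference sketch only; the AP1000 version spreads this work across
# processing cells, which is not shown here.
def radix_sort(keys):
    if not keys:
        return []
    passes = (max(keys).bit_length() + 7) // 8    # number of 8-bit digits
    for p in range(passes):
        shift = 8 * p
        buckets = [[] for _ in range(256)]
        for k in keys:
            buckets[(k >> shift) & 0xFF].append(k)   # stable distribution
        keys = [k for bucket in buckets for k in bucket]
    return keys

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]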
arxiv-669498
cs/0004014
Cluster Computing White Paper
<|reference_start|>Cluster Computing White Paper: Cluster computing is not a new area of computing. It is, however, evident that there is a growing interest in its usage in all areas where applications have traditionally used parallel or distributed computing platforms. The growing interest has been fuelled in part by the availability of powerful microprocessors and high-speed networks as off-the-shelf commodity components as well as in part by the rapidly maturing software components available to support high performance and high availability applications. This White Paper has been broken down into eleven sections, each of which has been put together by academics and industrial researchers who are experts in their fields and were willing to volunteer their time and effort to produce this White Paper. The status of this paper is draft and we are at the stage of publicizing its presence and making a Request For Comments (RFC).<|reference_end|>
arxiv
@article{baker2000cluster, title={Cluster Computing White Paper}, author={Mark Baker (University of Portsmouth, UK), et al}, journal={arXiv preprint arXiv:cs/0004014}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004014}, primaryClass={cs.DC cs.AR cs.NI} }
baker2000cluster
arxiv-669499
cs/0004015
Introduction to the GiNaC Framework for Symbolic Computation within the C++ Programming Language
<|reference_start|>Introduction to the GiNaC Framework for Symbolic Computation within the C++ Programming Language: The traditional split-up into a low level language and a high level language in the design of computer algebra systems may become obsolete with the advent of more versatile computer languages. We describe GiNaC, a special-purpose system that deliberately denies the need for such a distinction. It is entirely written in C++ and the user can interact with it directly in that language. It was designed to provide efficient handling of multivariate polynomials, algebras and special functions that are needed for loop calculations in theoretical quantum field theory. It also bears some potential to become a more general purpose symbolic package.<|reference_end|>
arxiv
@article{bauer2000introduction, title={Introduction to the GiNaC Framework for Symbolic Computation within the C++ Programming Language}, author={Christian Bauer, Alexander Frink, Richard Kreckel}, journal={J. Symbolic Computation (2002) 33, 1-12}, year={2000}, number={MZ-TH/00-17}, archivePrefix={arXiv}, eprint={cs/0004015}, primaryClass={cs.SC hep-ph physics.comp-ph} }
bauer2000introduction
arxiv-669500
cs/0004016
Looking at discourse in a corpus: The role of lexical cohesion
<|reference_start|>Looking at discourse in a corpus: The role of lexical cohesion: This paper is aimed at reporting on the development and application of a computer model for discourse analysis through segmentation. Segmentation refers to the principled division of texts into contiguous constituents. Other studies have looked at the application of a number of models to the analysis of discourse by computer. The segmentation procedure developed for the present investigation is called LSM ('Link Set Median'). It was applied to three corpora of 300 texts from three different genres. The results obtained by applying the LSM procedure to the corpora were then compared to segmentation carried out at random. Statistical analyses suggested that LSM significantly outperformed random segmentation, thus indicating that the segmentation was meaningful.<|reference_end|>
arxiv
@article{sardinha2000looking, title={Looking at discourse in a corpus: The role of lexical cohesion}, author={Tony Berber Sardinha (Catholic U. of Sao Paulo, Brazil)}, journal={arXiv preprint arXiv:cs/0004016}, year={2000}, archivePrefix={arXiv}, eprint={cs/0004016}, primaryClass={cs.CL} }
sardinha2000looking