Dataset schema (column, type, observed range):

id               stringlengths    9 to 10
submitter        stringlengths    5 to 47
authors          stringlengths    5 to 1.72k
title            stringlengths    11 to 234
comments         stringlengths    1 to 491
journal-ref      stringlengths    4 to 396
doi              stringlengths    13 to 97
report-no        stringlengths    4 to 138
categories       stringclasses    1 value
license          stringclasses    9 values
abstract         stringlengths    29 to 3.66k
versions         listlengths      1 to 21
update_date      int64            1,180B to 1,718B
authors_parsed   sequencelengths  1 to 98
cs/0105022
Li Min Fu
Li Min Fu
Multi-Channel Parallel Adaptation Theory for Rule Discovery
21 pages, 1 figure, 7 tables
null
null
null
cs.AI
null
In this paper, we introduce a new machine learning theory based on multi-channel parallel adaptation for rule discovery. This theory is distinguished from the familiar parallel-distributed adaptation theory of neural networks in terms of channel-based convergence to the target rules. We show how to realize this theory in a learning system named CFRule. CFRule is a parallel weight-based model, but it departs from traditional neural computing in that its internal knowledge is comprehensible. Furthermore, when the model converges upon training, each channel converges to a target rule. The model adaptation rule is derived by multi-level parallel weight optimization based on gradient descent. Since, however, gradient descent guarantees only local optimization, a multi-channel regression-based optimization strategy is developed to deal effectively with this problem. Formally, we prove that the CFRule model can explicitly and precisely encode any given rule set. Also, we prove a property related to asynchronous parallel convergence, which is a critical element of the multi-channel parallel adaptation theory for rule learning. Thanks to the quantizable nature of the CFRule model, rules can be extracted completely and soundly via a threshold-based mechanism. Finally, the practical application of the theory is demonstrated in DNA promoter recognition and hepatitis prognosis prediction.
[ { "version": "v1", "created": "Fri, 11 May 2001 14:17:42 GMT" } ]
1,179,878,400,000
[ [ "Fu", "Li Min", "" ] ]
cs/0106006
Aspassia Daskalopulu
Aspassia Daskalopulu, Marek Sergot
A Constraint-Driven System for Contract Assembly
null
Proc. 5th International Conference on Artificial Intelligence and Law, ACM Press, pp. 62-69, 1995
null
null
cs.AI
null
We present an approach for modelling the structure and coarse content of legal documents with a view to providing automated support for the drafting of contracts and contract database retrieval. The approach is designed to be applicable where contract drafting is based on model-form contracts or on existing examples of a similar type. The main features of the approach are: (1) the representation addresses the structure and the interrelationships between the constituent parts of contracts, but not the text of the document itself; (2) the representation of documents is separated from the mechanisms that manipulate it; and (3) the drafting process is subject to a collection of explicitly stated constraints that govern the structure of the documents. We describe the representation of document instances and of 'generic documents', which are data structures used to drive the creation of new document instances, and we show extracts from a sample session to illustrate the features of a prototype system implemented in MacProlog.
[ { "version": "v1", "created": "Thu, 7 Jun 2001 14:27:30 GMT" } ]
1,179,878,400,000
[ [ "Daskalopulu", "Aspassia", "" ], [ "Sergot", "Marek", "" ] ]
cs/0106007
Aspassia Daskalopulu
Chris Reed, Aspassia Daskalopulu
Modelling Contractual Arguments
null
Proceedings of the 4th International Conference on Argumentation, SICSAT, pp. 686-692, 1998
null
null
cs.AI
null
One influential approach to assessing the "goodness" of arguments is offered by the Pragma-Dialectical school (p-d) (Eemeren & Grootendorst 1992). This can be compared with Rhetorical Structure Theory (RST) (Mann & Thompson 1988), an approach that originates in discourse analysis. In p-d terms an argument is good if it avoids committing a fallacy, whereas in RST terms an argument is good if it is coherent. RST has been criticised (Snoeck Henkemans 1997) for providing only a partially functional account of argument, and similar criticisms have been raised in the Natural Language Generation (NLG) community, particularly by Moore & Pollack (1992), with regard to its account of intentionality in text in general. Mann and Thompson themselves note that although RST can be successfully applied to a wide range of texts from diverse domains, it fails to characterise some types of text, most notably legal contracts. There is ongoing research in the Artificial Intelligence and Law community exploring the potential for providing electronic support to contract negotiators, focusing on long-term, complex engineering agreements (see for example Daskalopulu & Sergot 1997). This paper provides a brief introduction to RST and illustrates its shortcomings with respect to contractual text. An alternative approach for modelling argument structure is presented which not only caters for contractual text, but also overcomes the aforementioned limitations of RST.
[ { "version": "v1", "created": "Thu, 7 Jun 2001 14:40:04 GMT" } ]
1,179,878,400,000
[ [ "Reed", "Chris", "" ], [ "Daskalopulu", "Aspassia", "" ] ]
cs/0106025
Antonis C. Kakas
Yannis Dimopoulos and Antonis Kakas
Information Integration and Computational Logic
53 Pages
null
null
null
cs.AI
null
Information Integration is a young and exciting field with enormous research and commercial significance in the new world of the Information Society. It stands at the crossroads of Databases and Artificial Intelligence, requiring novel techniques that bring together different methods from these fields. Information from disparate heterogeneous sources, often with no a priori common schema, needs to be synthesized in a flexible, transparent and intelligent way in order to respond to the demands of a query, thus enabling a more informed decision by the user or application program. The field, although relatively young, has already found many practical applications, particularly for integrating information over the World Wide Web. This paper gives a brief introduction to the field, highlighting some of the main current and future research issues and application areas. It attempts to evaluate the current and potential role of Computational Logic in this field and suggests some of the problems where logic-based techniques could be used.
[ { "version": "v1", "created": "Mon, 11 Jun 2001 20:00:04 GMT" } ]
1,179,878,400,000
[ [ "Dimopoulos", "Yannis", "" ], [ "Kakas", "Antonis", "" ] ]
cs/0107002
Laurent Granvilliers
Laurent Granvilliers and Eric Monfroy
Enhancing Constraint Propagation with Composition Operators
14 pages
null
null
null
cs.AI
null
Constraint propagation is a general algorithmic approach for pruning the search space of a CSP. In a uniform way, K. R. Apt has defined a computation as an iteration of reduction functions over a domain. He has also demonstrated the need for integrating static properties of reduction functions (commutativity and semi-commutativity) to design specialized algorithms such as AC3 and DAC. We introduce here a set of operators for modeling compositions of reduction functions. Two of the major goals are to tackle parallel computations, and dynamic behaviours (such as slow convergence).
[ { "version": "v1", "created": "Mon, 2 Jul 2001 08:08:39 GMT" } ]
1,179,878,400,000
[ [ "Granvilliers", "Laurent", "" ], [ "Monfroy", "Eric", "" ] ]
cs/0109006
Giuliana Sabbatini
T. Eiter, M. Fink, G. Sabbatini and H. Tompits
On Properties of Update Sequences Based on Causal Rejection
59 pages, 2 figures, 3 tables, to be published in "Theory and Practice of Logic Programming"
null
null
null
cs.AI
null
We consider an approach to update nonmonotonic knowledge bases represented as extended logic programs under answer set semantics. New information is incorporated into the current knowledge base subject to a causal rejection principle enforcing that, in case of conflicts, more recent rules are preferred and older rules are overridden. Such a rejection principle is also exploited in other approaches to update logic programs, e.g., in dynamic logic programming by Alferes et al. We give a thorough analysis of properties of our approach, to get a better understanding of the causal rejection principle. We review postulates for update and revision operators from the area of theory change and nonmonotonic reasoning, and some new properties are considered as well. We then consider refinements of our semantics which incorporate a notion of minimality of change. We also investigate the relationship to other approaches, showing that our approach is semantically equivalent to inheritance programs by Buccafurri et al. and that it coincides with certain classes of dynamic logic programs, for which we provide characterizations in terms of graph conditions. Therefore, most of our results about properties of the causal rejection principle apply to these approaches as well. Finally, we deal with the computational complexity of our approach, and outline how the update semantics and its refinements can be implemented on top of existing logic programming engines.
[ { "version": "v1", "created": "Wed, 5 Sep 2001 09:19:34 GMT" } ]
1,179,878,400,000
[ [ "Eiter", "T.", "" ], [ "Fink", "M.", "" ], [ "Sabbatini", "G.", "" ], [ "Tompits", "H.", "" ] ]
cs/0111060
Ivo Kwee
Ivo Kwee, Marcus Hutter, Juergen Schmidhuber
Gradient-based Reinforcement Planning in Policy-Search Methods
This is an extended version of the paper presented at the EWRL 2001 in Utrecht (The Netherlands)
null
null
14-01
cs.AI
null
We introduce a learning method called ``gradient-based reinforcement planning'' (GREP). Unlike traditional DP methods that improve their policy backwards in time, GREP is a gradient-based method that plans ahead and improves its policy before it actually acts in the environment. We derive formulas for the exact policy gradient that maximizes the expected future reward and confirm our ideas with numerical experiments.
[ { "version": "v1", "created": "Wed, 28 Nov 2001 13:43:13 GMT" } ]
1,179,878,400,000
[ [ "Kwee", "Ivo", "" ], [ "Hutter", "Marcus", "" ], [ "Schmidhuber", "Juergen", "" ] ]
cs/0112015
Moshe Tennenholtz
Moshe Tennenholtz
Rational Competitive Analysis
null
Proceedings of IJCAI-2001
null
null
cs.AI
null
Much work in computer science has adopted competitive analysis as a tool for decision making under uncertainty. In this work we extend competitive analysis to the context of multi-agent systems. Unlike classical competitive analysis where the behavior of an agent's environment is taken to be arbitrary, we consider the case where an agent's environment consists of other agents. These agents will usually obey some (minimal) rationality constraints. This leads to the definition of rational competitive analysis. We introduce the concept of rational competitive analysis, and initiate the study of competitive analysis for multi-agent systems. We also discuss the application of rational competitive analysis to the context of bidding games, as well as to the classical one-way trading problem.
[ { "version": "v1", "created": "Thu, 13 Dec 2001 00:46:10 GMT" } ]
1,179,878,400,000
[ [ "Tennenholtz", "Moshe", "" ] ]
cs/0201022
Pierre Albarede
Pierre Albarede
A theory of experiment
19 pages, LaTeX article (uses some pstricks); thorough revision 2; see also http://pierre.albarede.free.fr
null
null
null
cs.AI
null
This article aims at clarifying the language and practice of scientific experiment, mainly by hooking observability on calculability.
[ { "version": "v1", "created": "Wed, 23 Jan 2002 19:53:08 GMT" }, { "version": "v2", "created": "Sun, 23 Feb 2003 20:13:51 GMT" } ]
1,179,878,400,000
[ [ "Albarede", "Pierre", "" ] ]
cs/0202021
Daniel Lehmann
Sarit Kraus, Daniel Lehmann and Menachem Magidor
Nonmonotonic Reasoning, Preferential Models and Cumulative Logics
Presented at JELIA, June 1988. Some misprints in the Journal paper have been corrected
Journal of Artificial Intelligence, Vol. 44 Nos. 1-2 (July 1990) pp. 167-207
null
Leibniz Center for Research in Computer Science TR-88-15
cs.AI
null
Many systems that exhibit nonmonotonic behavior have been described and studied already in the literature. The general notion of nonmonotonic reasoning, though, has almost always been described only negatively, by the property it does not enjoy, i.e., monotonicity. We study here general patterns of nonmonotonic reasoning and try to isolate properties that could help us map the field of nonmonotonic reasoning by reference to positive properties. We concentrate on a number of families of nonmonotonic consequence relations, defined in the style of Gentzen. Both proof-theoretic and semantic points of view are developed in parallel. The former point of view was pioneered by D. Gabbay, while the latter has been advocated by Y. Shoham. Five such families are defined and characterized by representation theorems, relating the two points of view. One of the families of interest, that of preferential relations, turns out to have been studied by E. Adams. The "preferential" models proposed here are a much stronger tool than Adams' probabilistic semantics. The basic language used in this paper is that of propositional logic. The extension of our results to first order predicate calculi and the study of the computational complexity of the decision problems described in this paper will be treated in another paper.
[ { "version": "v1", "created": "Mon, 18 Feb 2002 10:29:54 GMT" } ]
1,179,878,400,000
[ [ "Kraus", "Sarit", "" ], [ "Lehmann", "Daniel", "" ], [ "Magidor", "Menachem", "" ] ]
cs/0202022
Daniel Lehmann
Daniel Lehmann and Menachem Magidor
What does a conditional knowledge base entail?
Preliminary version presented at KR'89. Minor corrections of the Journal Version
Journal of Artificial Intelligence, Vol. 55 no.1 (May 1992) pp. 1-60. Erratum in Vol. 68 (1994) p. 411
null
Leibniz Center for Research in Computer Science TR-88-16 and TR-90-10
cs.AI
null
This paper presents a logical approach to nonmonotonic reasoning based on the notion of a nonmonotonic consequence relation. A conditional knowledge base, consisting of a set of conditional assertions of the type "if ... then ...", represents the explicit defeasible knowledge an agent has about the way the world generally behaves. We look for a plausible definition of the set of all conditional assertions entailed by a conditional knowledge base. In a previous paper, S. Kraus and the authors defined and studied "preferential" consequence relations. They noticed that not all preferential relations could be considered as reasonable inference procedures. This paper studies a more restricted class of consequence relations, "rational" relations. It is argued that any reasonable nonmonotonic inference procedure should define a rational relation. It is shown that the rational relations are exactly those that may be represented by a "ranked" preferential model, or by a (non-standard) probabilistic model. The rational closure of a conditional knowledge base is defined and shown to provide an attractive answer to the question of the title. Global properties of this closure operation are proved: it is a cumulative operation. It is also computationally tractable. This paper assumes the underlying language is propositional.
[ { "version": "v1", "created": "Mon, 18 Feb 2002 12:43:18 GMT" } ]
1,179,878,400,000
[ [ "Lehmann", "Daniel", "" ], [ "Magidor", "Menachem", "" ] ]
cs/0202024
Daniel Lehmann
Daniel Lehmann
A note on Darwiche and Pearl
A small unpublished remark on a paper by Darwiche and Pearl
null
null
null
cs.AI
null
It is shown that Darwiche and Pearl's postulates imply an interesting property, not noticed by the authors.
[ { "version": "v1", "created": "Mon, 18 Feb 2002 15:23:06 GMT" } ]
1,179,878,400,000
[ [ "Lehmann", "Daniel", "" ] ]
cs/0202025
Daniel Lehmann
Daniel Lehmann, Menachem Magidor and Karl Schlechta
Distance Semantics for Belief Revision
Preliminary version presented at TARK '96
Journal of Symbolic Logic, Vol. 66 No.1 (March 2001) pp. 295-317
null
Leibniz Center for Research in Computer Science TR-98-10
cs.AI
null
A vast and interesting family of natural semantics for belief revision is defined. Suppose one is given a distance d between any two models. One may then define the revision of a theory K by a formula a as the theory defined by the set of all those models of a that are closest, by d, to the set of models of K. This family is characterized by a set of rationality postulates that extends the AGM postulates. The new postulates describe properties of iterated revisions.
[ { "version": "v1", "created": "Mon, 18 Feb 2002 15:36:46 GMT" } ]
1,179,878,400,000
[ [ "Lehmann", "Daniel", "" ], [ "Magidor", "Menachem", "" ], [ "Schlechta", "Karl", "" ] ]
cs/0202026
Daniel Lehmann
Shai Berger, Daniel Lehmann and Karl Schlechta
Preferred History Semantics for Iterated Updates
null
Journal of Logic and Computation, Vol. 9 no. 6 (1999) pp. 817-833
null
Leibniz Center for Research in Computer Science TR-98-11 (July 1998)
cs.AI
null
We give a semantics to iterated update by a preference relation on possible developments. An iterated update is a sequence of formulas, giving (incomplete) information about successive states of the world. A development is a sequence of models, describing a possible trajectory through time. We assume a principle of inertia and prefer those developments, which are compatible with the information, and avoid unnecessary changes. The logical properties of the updates defined in this way are considered, and a representation result is proved.
[ { "version": "v1", "created": "Mon, 18 Feb 2002 15:53:56 GMT" } ]
1,179,878,400,000
[ [ "Berger", "Shai", "" ], [ "Lehmann", "Daniel", "" ], [ "Schlechta", "Karl", "" ] ]
cs/0202031
Daniel Lehmann
Michael Freund and Daniel Lehmann
Nonmonotonic inference operations
54 pages. A short version appeared in Studia Logica, Vol. 53 no. 2 (1994) pp. 161-201
Bulletin of the IGPL, Vol. 1 no. 1 (July 1993), pp. 23-68
null
Leibniz Center for Research in Computer Science: TR-92-2
cs.AI
null
A. Tarski proposed the study of infinitary consequence operations as the central topic of mathematical logic. He considered monotonicity to be a property of all such operations. In this paper, we weaken the monotonicity requirement and consider more general operations, inference operations. These operations describe the nonmonotonic logics both humans and machines seem to be using when inferring defeasible information from incomplete knowledge. We single out a number of interesting families of inference operations. This study of infinitary inference operations is inspired by the results of Kraus, Lehmann and Magidor on finitary nonmonotonic operations, but this paper is self-contained.
[ { "version": "v1", "created": "Wed, 20 Feb 2002 14:19:42 GMT" } ]
1,179,878,400,000
[ [ "Freund", "Michael", "" ], [ "Lehmann", "Daniel", "" ] ]
cs/0202033
Daniel Lehmann
Daniel Lehmann
The logical meaning of Expansion
9 pages. Unpublished
null
null
null
cs.AI
null
The Expansion property considered by researchers in Social Choice is shown to correspond to a logical property of nonmonotonic consequence relations that is the {\em pure}, i.e., not involving connectives, version of a previously known weak rationality condition. The assumption that the union of two definable sets of models is definable is needed for the soundness part of the result.
[ { "version": "v1", "created": "Wed, 20 Feb 2002 14:50:49 GMT" } ]
1,179,878,400,000
[ [ "Lehmann", "Daniel", "" ] ]
cs/0203002
Daniel Lehmann
Daniel Lehmann
Another perspective on Default Reasoning
Presented at Workshop on Logical Formalizations of Common Sense, Austin (Texas), January 1993
Annals of Mathematics and Artificial Intelligence, 15(1) (1995) pp. 61-82
null
Leibniz Center for Research in Computer Science TR-92-12, July 1992
cs.AI
null
The lexicographic closure of any given finite set D of normal defaults is defined. A conditional assertion "if a then b" is in this lexicographic closure if, given the defaults D and the fact a, one would conclude b. The lexicographic closure is essentially a rational extension of D, and of its rational closure, defined in a previous paper. It provides a logic of normal defaults that is different from the one proposed by R. Reiter and that is rich enough not to require the consideration of non-normal defaults. A large number of examples are provided to show that the lexicographic closure corresponds to the basic intuitions behind Reiter's logic of defaults.
[ { "version": "v1", "created": "Fri, 1 Mar 2002 11:06:49 GMT" } ]
1,179,878,400,000
[ [ "Lehmann", "Daniel", "" ] ]
cs/0203003
Daniel Lehmann
Yuri Kaluzhny and Daniel Lehmann
Deductive Nonmonotonic Inference Operations: Antitonic Representations
null
Journal of Logic and Computation, 5(1) (1995) pp. 111-122
null
Leibniz Center for Research in Computer Science TR-94-3, March 1994
cs.AI
null
We provide a characterization of those nonmonotonic inference operations C for which C(X) may be described as the set of all logical consequences of X together with some set of additional assumptions S(X) that depends anti-monotonically on X (i.e., X is a subset of Y implies that S(Y) is a subset of S(X)). The operations represented are exactly characterized in terms of properties most of which have been studied in Freund-Lehmann(cs.AI/0202031). Similar characterizations of right-absorbing and cumulative operations are also provided. For cumulative operations, our results fit in closely with those of Freund. We then discuss extending finitary operations to infinitary operations in a canonical way and discuss co-compactness properties. Our results provide a satisfactory notion of pseudo-compactness, generalizing to deductive nonmonotonic operations the notion of compactness for monotonic operations. They also provide an alternative, more elegant and more general, proof of the existence of an infinitary deductive extension for any finitary deductive operation (Theorem 7.9 of Freund-Lehmann).
[ { "version": "v1", "created": "Fri, 1 Mar 2002 11:20:59 GMT" } ]
1,179,878,400,000
[ [ "Kaluzhny", "Yuri", "" ], [ "Lehmann", "Daniel", "" ] ]
cs/0203004
Daniel Lehmann
Daniel Lehmann
Stereotypical Reasoning: Logical Properties
Presented at Fourth Workshop on Logic, Language, Information and Computation, Fortaleza (Brasil), August 1997
Logic Journal of the Interest Group in Pure and Applied Logics (IGPL), 6(1) (1998) pp. 49-58
null
Leibniz Center for Research in Computer Science TR-97-10
cs.AI
null
Stereotypical reasoning assumes that the situation at hand is one of a kind and that it enjoys the properties generally associated with that kind of situation. It is one of the most basic forms of nonmonotonic reasoning. A formal model for stereotypical reasoning is proposed and the logical properties of this form of reasoning are studied. Stereotypical reasoning is shown to be cumulative under weak assumptions.
[ { "version": "v1", "created": "Mon, 4 Mar 2002 08:57:54 GMT" } ]
1,179,878,400,000
[ [ "Lehmann", "Daniel", "" ] ]
cs/0203005
Torsten Schaub
J. P. Delgrande, T. Schaub, and H. Tompits
A Framework for Compiling Preferences in Logic Programs
To appear in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
We introduce a methodology and framework for expressing general preference information in logic programming under the answer set semantics. An ordered logic program is an extended logic program in which rules are named by unique terms, and in which preferences among rules are given by a set of atoms of form s < t where s and t are names. An ordered logic program is transformed into a second, regular, extended logic program wherein the preferences are respected, in that the answer sets obtained in the transformed program correspond with the preferred answer sets of the original program. Our approach allows the specification of dynamic orderings, in which preferences can appear arbitrarily within a program. Static orderings (in which preferences are external to a logic program) are a trivial restriction of the general dynamic case. First, we develop a specific approach to reasoning with preferences, wherein the preference ordering specifies the order in which rules are to be applied. We then demonstrate the wide range of applicability of our framework by showing how other approaches, among them that of Brewka and Eiter, can be captured within our framework. Since the result of each of these transformations is an extended logic program, we can make use of existing implementations, such as dlv and smodels. To this end, we have developed a publicly available compiler as a front-end for these programming systems.
[ { "version": "v1", "created": "Mon, 4 Mar 2002 13:00:41 GMT" }, { "version": "v2", "created": "Tue, 5 Mar 2002 14:01:51 GMT" } ]
1,179,878,400,000
[ [ "Delgrande", "J. P.", "" ], [ "Schaub", "T.", "" ], [ "Tompits", "H.", "" ] ]
cs/0203007
Yan Zhang
Yan Zhang
Two results for prioritized logic programming
20 pages, to appear in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
Prioritized default reasoning has illustrated its rich expressiveness and flexibility in knowledge representation and reasoning. However, many important aspects of prioritized default reasoning have yet to be thoroughly explored. In this paper, we investigate two properties of prioritized logic programs in the context of answer set semantics. Specifically, we reveal a close relationship between mutual defeasibility and uniqueness of the answer set for a prioritized logic program. We then explore how the splitting technique for extended logic programs can be extended to prioritized logic programs. We prove splitting theorems that can be used to simplify the evaluation of a prioritized logic program under certain conditions.
[ { "version": "v1", "created": "Tue, 5 Mar 2002 00:28:04 GMT" } ]
1,179,878,400,000
[ [ "Zhang", "Yan", "" ] ]
cs/0204032
Daniel Lehmann
Michael Freund and Daniel Lehmann
Belief Revision and Rational Inference
25 pages
null
null
Leibniz Center for Research in Computer Science, Hebrew University: TR-94-16, July 1994
cs.AI
null
The (extended) AGM postulates for belief revision seem to deal with the revision of a given theory K by an arbitrary formula, but not to constrain the revisions of two different theories by the same formula. A new postulate is proposed and compared with other similar postulates that have been proposed in the literature. The AGM revisions that satisfy this new postulate stand in one-to-one correspondence with the rational, consistency-preserving relations. This correspondence is described explicitly. Two viewpoints on iterative revisions are distinguished and discussed.
[ { "version": "v1", "created": "Sun, 14 Apr 2002 09:22:42 GMT" } ]
1,179,878,400,000
[ [ "Freund", "Michael", "" ], [ "Lehmann", "Daniel", "" ] ]
cs/0205014
Miroslaw Truszczynski
Marc Denecker, Victor W. Marek and Miroslaw Truszczynski
Ultimate approximations in nonmonotonic knowledge representation systems
This paper was published in Principles of Knowledge Representation and Reasoning, Proceedings of the Eighth International Conference (KR2002)
null
null
null
cs.AI
null
We study fixpoints of operators on lattices. To this end we introduce the notion of an approximation of an operator. We order approximations by means of a precision ordering. We show that each lattice operator O has a unique most precise or ultimate approximation. We demonstrate that fixpoints of this ultimate approximation provide useful insights into fixpoints of the operator O. We apply our theory to logic programming and introduce the ultimate Kripke-Kleene, well-founded and stable semantics. We show that the ultimate Kripke-Kleene and well-founded semantics are more precise than their standard counterparts. We argue that ultimate semantics for logic programming have attractive epistemological properties and that, while in general they are computationally more complex than the standard semantics, for many classes of theories, their complexity is no worse.
[ { "version": "v1", "created": "Sat, 11 May 2002 20:44:16 GMT" } ]
1,179,878,400,000
[ [ "Denecker", "Marc", "" ], [ "Marek", "Victor W.", "" ], [ "Truszczynski", "Miroslaw", "" ] ]
cs/0206003
Yan Zhang
Yan Zhang
Handling Defeasibilities in Action Domains
49 pages, 1 figure, to appear in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
Representing defeasibility is an important issue in common sense reasoning. In reasoning about action and change, this issue becomes more difficult because domain and action related defeasible information may conflict with general inertia rules. Furthermore, different types of defeasible information may also interfere with each other during the reasoning. In this paper, we develop a prioritized logic programming approach to handle defeasibilities in reasoning about action. In particular, we propose three action languages {\cal AT}^{0}, {\cal AT}^{1} and {\cal AT}^{2} which handle three types of defeasibilities in action domains named defeasible constraints, defeasible observations and actions with defeasible and abnormal effects respectively. Each language with a higher superscript can be viewed as an extension of the language with a lower superscript. These action languages inherit the simple syntax of {\cal A} language but their semantics is developed in terms of transition systems where transition functions are defined based on prioritized logic programs. By illustrating various examples, we show that our approach eventually provides a powerful mechanism to handle various defeasibilities in temporal prediction and postdiction. We also investigate semantic properties of these three action languages and characterize classes of action domains that present more desirable solutions in reasoning about action within the underlying action languages.
[ { "version": "v1", "created": "Mon, 3 Jun 2002 06:20:21 GMT" } ]
1,179,878,400,000
[ [ "Zhang", "Yan", "" ] ]
cs/0206041
Magnus Boman
Jarmo Laaksolahti, Magnus Boman
Anticipatory Guidance of Plot
19 pages, 5 figures
null
null
null
cs.AI
null
An anticipatory system for guiding plot development in interactive narratives is described. The executable model is a finite automaton that provides the implemented system with a look-ahead. The identification of undesirable future states in the model is used to guide the player, in a transparent manner. In this way, too radical twists of the plot can be avoided. Since the player participates in the development of the plot, such guidance can have many forms, depending on the environment of the player, on the behavior of the other players, and on the means of player interaction. We present a design method for interactive narratives which produces designs suitable for the implementation of anticipatory mechanisms. Use of the method is illustrated by application to our interactive computer game Kaktus.
[ { "version": "v1", "created": "Wed, 26 Jun 2002 09:17:13 GMT" }, { "version": "v2", "created": "Wed, 19 Feb 2003 10:58:39 GMT" } ]
1,179,878,400,000
[ [ "Laaksolahti", "Jarmo", "" ], [ "Boman", "Magnus", "" ] ]
cs/0207021
Piero A. Bonatti
Piero A. Bonatti
Abduction, ASP and Open Logic Programs
7 pages, NMR'02 Workshop
null
null
null
cs.AI
null
Open logic programs and open entailment have been recently proposed as an abstract framework for the verification of incomplete specifications based upon normal logic programs and the stable model semantics. There are obvious analogies between open predicates and abducible predicates. However, despite superficial similarities, there are features of open programs that have no immediate counterpart in the framework of abduction, and vice versa. Similarly, open programs cannot be immediately simulated with answer set programming (ASP). In this paper we start a thorough investigation of the relationships between open inference, abduction and ASP. We shall prove that open programs generalize the other two frameworks. The generalized framework suggests interesting extensions of abduction under the generalized stable model semantics. In some cases, we will be able to reduce open inference to abduction and ASP, thereby estimating its computational complexity. At the same time, the aforementioned reduction opens the way to new applications of abduction and ASP.
[ { "version": "v1", "created": "Sun, 7 Jul 2002 09:55:00 GMT" } ]
1,179,878,400,000
[ [ "Bonatti", "Piero A.", "" ] ]
cs/0207023
Tran Cao Son
Tran Cao Son, Chitta Baral, Nam Tran, and Sheila McIlraith
Domain-Dependent Knowledge in Answer Set Planning
70 pages, accepted for publication in TOCL; version with all proofs
null
null
null
cs.AI
null
In this paper we consider three different kinds of domain-dependent control knowledge (temporal, procedural and HTN-based) that are useful in planning. Our approach is declarative and relies on the language of logic programming with answer set semantics (AnsProlog*). AnsProlog* is designed to plan without control knowledge. We show how temporal, procedural and HTN-based control knowledge can be incorporated into AnsProlog* by the modular addition of a small number of domain-dependent rules, without the need to modify the planner. We formally prove the correctness of our planner, both in the absence and presence of the control knowledge. Finally, we perform some initial experimentation that demonstrates the potential reduction in planning time that can be achieved when procedural domain knowledge is used to solve planning problems with large plan length.
[ { "version": "v1", "created": "Mon, 8 Jul 2002 00:31:54 GMT" }, { "version": "v2", "created": "Mon, 29 Aug 2005 20:46:16 GMT" } ]
1,179,878,400,000
[ [ "Son", "Tran Cao", "" ], [ "Baral", "Chitta", "" ], [ "Tran", "Nam", "" ], [ "McIlraith", "Sheila", "" ] ]
cs/0207025
Sylvie Doutre
C. Cayrol, S. Doutre, M.-C. Lagasquie-Schiex, J. Mengin
"Minimal defence": a refinement of the preferred semantics for argumentation frameworks
8 pages, 3 figures
Proceedings of the 9th International Workshop on Non-Monotonic Reasoning, 2002, pp. 408-415
null
null
cs.AI
null
Dung's abstract framework for argumentation enables a study of the interactions between arguments based solely on an ``attack'' binary relation on the set of arguments. Various ways to solve conflicts between contradictory pieces of information have been proposed in the context of argumentation, nonmonotonic reasoning or logic programming, and can be captured by appropriate semantics within Dung's framework. A common feature of these semantics is that one can always maximize in some sense the set of acceptable arguments. We propose in this paper to extend Dung's framework in order to allow for the representation of what we call ``restricted'' arguments: these arguments should only be used if absolutely necessary, that is, in order to support other arguments that would otherwise be defeated. We modify Dung's preferred semantics accordingly: a set of arguments becomes acceptable only if it contains a minimum of restricted arguments, for a maximum of unrestricted arguments.
[ { "version": "v1", "created": "Mon, 8 Jul 2002 13:30:16 GMT" } ]
1,179,878,400,000
[ [ "Cayrol", "C.", "" ], [ "Doutre", "S.", "" ], [ "Lagasquie-Schiex", "M. -C.", "" ], [ "Mengin", "J.", "" ] ]
cs/0207029
Alexander Bochman
Alexander Bochman
Two Representations for Iterative Non-prioritized Change
7 pages, Proceedings NMR'02, references added
null
null
null
cs.AI
null
We address a general representation problem for belief change, and describe two interrelated representations for iterative non-prioritized change: a logical representation in terms of persistent epistemic states, and a constructive representation in terms of flocks of bases.
[ { "version": "v1", "created": "Tue, 9 Jul 2002 12:32:45 GMT" }, { "version": "v2", "created": "Sun, 14 Jul 2002 16:26:23 GMT" } ]
1,179,878,400,000
[ [ "Bochman", "Alexander", "" ] ]
cs/0207030
Alexander Bochman
Alexander Bochman
Collective Argumentation
8 pages, Proceedings NMR'02, references added
null
null
null
cs.AI
null
An extension of an abstract argumentation framework, called collective argumentation, is introduced in which the attack relation is defined directly among sets of arguments. The extension turns out to be suitable, in particular, for representing semantics of disjunctive logic programs. Two special kinds of collective argumentation are considered in which the opponents can share their arguments.
[ { "version": "v1", "created": "Tue, 9 Jul 2002 12:42:24 GMT" }, { "version": "v2", "created": "Sun, 14 Jul 2002 16:21:33 GMT" } ]
1,179,878,400,000
[ [ "Bochman", "Alexander", "" ] ]
cs/0207042
Gerhard Brewka
Gerhard Brewka
Logic Programming with Ordered Disjunction
null
null
null
null
cs.AI
null
Logic programs with ordered disjunction (LPODs) combine ideas underlying Qualitative Choice Logic (Brewka et al. KR 2002) and answer set programming. Logic programming under answer set semantics is extended with a new connective called ordered disjunction. The new connective allows us to represent alternative, ranked options for problem solutions in the heads of rules: A \times B intuitively means: if possible A, but if A is not possible then at least B. The semantics of logic programs with ordered disjunction is based on a preference relation on answer sets. LPODs are useful for applications in design and configuration and can serve as a basis for qualitative decision making.
[ { "version": "v1", "created": "Thu, 11 Jul 2002 11:03:34 GMT" } ]
1,179,878,400,000
[ [ "Brewka", "Gerhard", "" ] ]
cs/0207045
Pierre Marquis
Adnan Darwiche and Pierre Marquis
Compilation of Propositional Weighted Bases
Proceedings of the Ninth International Workshop on Non-Monotonic Reasoning (NMR'02), Toulouse, 2002 (6-14)
null
null
null
cs.AI
null
In this paper, we investigate the extent to which knowledge compilation can be used to improve inference from propositional weighted bases. We present a general notion of compilation of a weighted base that is parametrized by any equivalence-preserving compilation function. Both negative and positive results are presented. On the one hand, complexity results are identified, showing that the inference problem from a compiled weighted base is as difficult as in the general case, when the prime implicates, Horn cover or renamable Horn cover classes are targeted. On the other hand, we show that the inference problem becomes tractable whenever DNNF-compilations are used and clausal queries are considered. Moreover, we show that the set of all preferred models of a DNNF-compilation of a weighted base can be computed in time polynomial in the output size. Finally, we sketch how our results can be used in model-based diagnosis in order to compute the most probable diagnoses of a system.
[ { "version": "v1", "created": "Thu, 11 Jul 2002 16:11:40 GMT" } ]
1,179,878,400,000
[ [ "Darwiche", "Adnan", "" ], [ "Marquis", "Pierre", "" ] ]
cs/0207056
Loizos Michael
Antonis Kakas and Loizos Michael
Modeling Complex Domains of Actions and Change
9 pages, 3 figures, to download the E-RES system and a full representation of the Zoo Scenario World, visit http://www.cs.ucy.ac.cy/~pslogic/
null
null
null
cs.AI
null
This paper studies the problem of modeling complex domains of actions and change within high-level action description languages. We investigate two main issues of concern: (a) can we represent complex domains that capture together different problems such as ramifications, non-determinism and concurrency of actions, at a high-level, close to the given natural ontology of the problem domain and (b) what features of such a representation can affect, and how, its computational behaviour. The paper describes the main problems faced in this representation task and presents the results of an empirical study, carried out through a series of controlled experiments, to analyze the computational performance of reasoning in these representations. The experiments compare different representations obtained, for example, by changing the basic ontology of the domain or by varying the degree of use of indirect effect laws through domain constraints. This study has helped to expose the main sources of computational difficulty in the reasoning and suggest some methodological guidelines for representing complex domains. Although our work has been carried out within one particular high-level description language, we believe that the results, especially those that relate to the problems of representation, are independent of the specific modeling language.
[ { "version": "v1", "created": "Sat, 13 Jul 2002 12:00:16 GMT" } ]
1,179,878,400,000
[ [ "Kakas", "Antonis", "" ], [ "Michael", "Loizos", "" ] ]
cs/0207059
T. Bench-Capon
T. Bench-Capon
Value Based Argumentation Frameworks
null
null
null
null
cs.AI
null
This paper introduces the notion of value-based argumentation frameworks, an extension of the standard argumentation frameworks proposed by Dung, which are able to show how rational decision is possible in cases where arguments derive their force from the social values their acceptance would promote.
[ { "version": "v1", "created": "Mon, 15 Jul 2002 11:30:16 GMT" } ]
1,179,878,400,000
[ [ "Bench-Capon", "T.", "" ] ]
cs/0207060
Kewen Wang
Torsten Schaub, Kewen Wang
Preferred well-founded semantics for logic programming by alternating fixpoints: Preliminary report
Proceedings of the Workshop on Preferences in Artificial Intelligence and Constraint; in: Proceedings of the Workshop on Non-Monotonic Reasoning (NMR'2002)
null
null
null
cs.AI
null
We analyze the problem of defining well-founded semantics for ordered logic programs within a general framework based on alternating fixpoint theory. We start by showing that generalizations of existing answer set approaches to preference are too weak in the setting of well-founded semantics. We then specify some informal yet intuitive criteria and propose a semantical framework for preference handling that is more suitable for defining well-founded semantics for ordered logic programs. The suitability of the new approach is supported by the fact that many attractive properties are satisfied by our semantics. In particular, our semantics is still correct with respect to various existing answer set semantics, while it successfully overcomes the weakness of their generalization to well-founded semantics. Finally, we indicate how an existing preferred well-founded semantics can be captured within our semantical framework.
[ { "version": "v1", "created": "Mon, 15 Jul 2002 13:30:24 GMT" } ]
1,179,878,400,000
[ [ "Schaub", "Torsten", "" ], [ "Wang", "Kewen", "" ] ]
cs/0207065
Dritan Berzati
Dritan Berzati, Bernhard Anrig and Juerg Kohlas
Embedding Default Logic in Propositional Argumentation Systems
9 pages
null
null
null
cs.AI
null
In this paper we present a transformation of finite propositional default theories into so-called propositional argumentation systems. This transformation allows one to characterize all notions of Reiter's default logic in the framework of argumentation systems. As a consequence, computing extensions, or determining whether a given formula belongs to one extension or to all extensions, can be done without leaving the field of classical propositional logic. The transformation proposed is linear in the number of defaults.
[ { "version": "v1", "created": "Tue, 16 Jul 2002 15:16:07 GMT" } ]
1,179,878,400,000
[ [ "Berzati", "Dritan", "" ], [ "Anrig", "Bernhard", "" ], [ "Kohlas", "Juerg", "" ] ]
cs/0207067
Bart Verheij
Bart Verheij
On the existence and multiplicity of extensions in dialectical argumentation
10 pages; 9th International Workshop on Non-Monotonic Reasoning (NMR'2002)
Verheij, Bart (2002). On the existence and the multiplicity of extensions in dialectical argumentation. Proceedings of the 9th International Workshop on Non-Monotonic Reasoning (NMR'2002) (eds. S. Benferhat and E. Giunchiglia), pp. 416-425. Toulouse
null
null
cs.AI
null
In the present paper, the existence and multiplicity problems of extensions are addressed. The focus is on extension of the stable type. The main result of the paper is an elegant characterization of the existence and multiplicity of extensions in terms of the notion of dialectical justification, a close cousin of the notion of admissibility. The characterization is given in the context of the particular logic for dialectical argumentation DEFLOG. The results are of direct relevance for several well-established models of defeasible reasoning (like default logic, logic programming and argumentation frameworks), since elsewhere dialectical argumentation has been shown to have close formal connections with these models.
[ { "version": "v1", "created": "Wed, 17 Jul 2002 12:09:45 GMT" } ]
1,179,878,400,000
[ [ "Verheij", "Bart", "" ] ]
cs/0207075
Thomas Lukasiewicz
Thomas Lukasiewicz
Nonmonotonic Probabilistic Logics between Model-Theoretic Probabilistic Logic and Probabilistic Logic under Coherence
10 pages; in Proceedings of the 9th International Workshop on Non-Monotonic Reasoning (NMR-2002), Special Session on Uncertainty Frameworks in Nonmonotonic Reasoning, pages 265-274, Toulouse, France, April 2002
null
null
null
cs.AI
null
Recently, it has been shown that probabilistic entailment under coherence is weaker than model-theoretic probabilistic entailment. Moreover, probabilistic entailment under coherence is a generalization of default entailment in System P. In this paper, we continue this line of research by presenting probabilistic generalizations of more sophisticated notions of classical default entailment that lie between model-theoretic probabilistic entailment and probabilistic entailment under coherence. That is, the new formalisms properly generalize their counterparts in classical default reasoning, they are weaker than model-theoretic probabilistic entailment, and they are stronger than probabilistic entailment under coherence. The new formalisms are useful especially for handling probabilistic inconsistencies related to conditioning on zero events. They can also be applied for probabilistic belief revision. More generally, in the same spirit as a similar previous paper, this paper sheds light on exciting new formalisms for probabilistic reasoning beyond the well-known standard ones.
[ { "version": "v1", "created": "Mon, 22 Jul 2002 01:44:25 GMT" } ]
1,179,878,400,000
[ [ "Lukasiewicz", "Thomas", "" ] ]
cs/0207083
Choh Man Teng
Henry E. Kyburg Jr. and Choh Man Teng
Evaluating Defaults
8 pages
null
null
null
cs.AI
null
We seek to find normative criteria of adequacy for nonmonotonic logic similar to the criterion of validity for deductive logic. Rather than stipulating that the conclusion of an inference be true in all models in which the premises are true, we require that the conclusion of a nonmonotonic inference be true in ``almost all'' models of a certain sort in which the premises are true. This ``certain sort'' specification picks out the models that are relevant to the inference, taking into account factors such as specificity and vagueness, and previous inferences. The frequencies characterizing the relevant models reflect known frequencies in our actual world. The criteria of adequacy for a default inference can be extended by thresholding to criteria of adequacy for an extension. We show that this avoids the implausibilities that might otherwise result from the chaining of default inferences. The model proportions, when construed in terms of frequencies, provide a verifiable grounding of default rules, and can become the basis for generating default rules from statistics.
[ { "version": "v1", "created": "Wed, 24 Jul 2002 23:05:29 GMT" } ]
1,179,878,400,000
[ [ "Kyburg", "Henry E.", "Jr." ], [ "Teng", "Choh Man", "" ] ]
cs/0208017
Moinard
Yves Moinard
Linking Makinson and Kraus-Lehmann-Magidor preferential entailments
Proceedings of the 9th Int. Workshop on Non-Monotonic Reasoning (NMR'2002), Toulouse, France, April 19-21, 2002. Also, a paper with the same title appeared at ECAI 2002 (15th European Conf. on A.I.)
null
null
null
cs.AI
null
About ten years ago, various notions of preferential entailment were introduced. The main reference is a paper by Kraus, Lehmann and Magidor (KLM), one of the main competitors being a more general version defined by Makinson (MAK). These two versions have already been compared, but it is time to revisit these comparisons. Here are our three main results: (1) These two notions are equivalent, provided that we restrict our attention, as done in KLM, to the cases where the entailment respects logical equivalence (on the left and on the right). (2) A serious simplification of the description of the fundamental cases in which MAK is equivalent to KLM, including a natural passage in both ways. (3) The two previous results are given for preferential entailments more general than those considered in some of the original texts, but they apply also to the original definitions and, for this particular case also, the models can be simplified.
[ { "version": "v1", "created": "Thu, 8 Aug 2002 17:08:46 GMT" } ]
1,179,878,400,000
[ [ "Moinard", "Yves", "" ] ]
cs/0208019
Mikalai Birukou
Mikalai Birukou
Knowledge Representation
null
null
null
null
cs.AI
null
This work analyses main features that should be present in knowledge representation. It suggests a model for representation and a way to implement this model in software. Representation takes care of both low-level sensor information and high-level concepts.
[ { "version": "v1", "created": "Mon, 12 Aug 2002 22:34:47 GMT" } ]
1,179,878,400,000
[ [ "Birukou", "Mikalai", "" ] ]
cs/0208034
Joseph Y. Halpern
Joseph Y. Halpern and Judea Pearl
Causes and Explanations: A Structural-Model Approach. Part II: Explanations
Part I of the paper (on causes) is also on the arxiv. The two papers originally were posted as one submission. The conference version of the paper appears in IJCAI '01. This paper will appear in the British Journal for Philosophy of Science
null
null
null
cs.AI
null
We propose new definitions of (causal) explanation, using structural equations to model counterfactuals. The definition is based on the notion of actual cause, as defined and motivated in a companion paper. Essentially, an explanation is a fact that is not known for certain but, if found to be true, would constitute an actual cause of the fact to be explained, regardless of the agent's initial uncertainty. We show that the definition handles well a number of problematic examples from the literature.
[ { "version": "v1", "created": "Tue, 20 Aug 2002 23:08:49 GMT" }, { "version": "v2", "created": "Mon, 7 Nov 2005 20:13:44 GMT" }, { "version": "v3", "created": "Sat, 19 Nov 2005 23:16:59 GMT" } ]
1,179,878,400,000
[ [ "Halpern", "Joseph Y.", "" ], [ "Pearl", "Judea", "" ] ]
cs/0209019
Thomas Eiter
T. Eiter, M. Fink, G. Sabbatini, H. Tompits
Reasoning about Evolving Nonmonotonic Knowledge Bases
47 pages. A preliminary version appeared in: Proc. 8th International Conference on Logic for Programming, Artificial Intelligence and Reasoning (LPAR 2001), R. Nieuwenhuis and A. Voronkov (eds), pp. 407-421, LNCS 2250, Springer 2001
null
null
INFSYS RR-1843-02-11
cs.AI
null
Recently, several approaches to updating knowledge bases modeled as extended logic programs have been introduced, ranging from basic methods to incorporate (sequences of) sets of rules into a logic program, to more elaborate methods which use an update policy for specifying how updates must be incorporated. In this paper, we introduce a framework for reasoning about evolving knowledge bases, which are represented as extended logic programs and maintained by an update policy. We first describe a formal model which captures various update approaches, and we define a logical language for expressing properties of evolving knowledge bases. We then investigate semantical and computational properties of our framework, where we focus on properties of knowledge states with respect to the canonical reasoning task of whether a given formula holds on a given evolving knowledge base. In particular, we present finitary characterizations of the evolution for certain classes of framework instances, which can be exploited for obtaining decidability results. In more detail, we characterize the complexity of reasoning for some meaningful classes of evolving knowledge bases, ranging from polynomial to double exponential space complexity.
[ { "version": "v1", "created": "Mon, 16 Sep 2002 19:23:19 GMT" } ]
1,179,878,400,000
[ [ "Eiter", "T.", "" ], [ "Fink", "M.", "" ], [ "Sabbatini", "G.", "" ], [ "Tompits", "H.", "" ] ]
cs/0209022
Carlos Gershenson
Carlos Gershenson
A Comparison of Different Cognitive Paradigms Using Simple Animats in a Virtual Laboratory, with Implications to the Notion of Cognition
MSc Thesis, University of Sussex. pdf available from http://www.cogs.susx.ac.uk/users/carlos
null
null
null
cs.AI
null
In this thesis I present a virtual laboratory which implements five different models for controlling animats: a rule-based system, a behaviour-based system, a concept-based system, a neural network, and a Braitenberg architecture. Through different experiments, I compare the performance of the models and conclude that there is no "best" model, since different models are better for different things in different contexts. The models I chose, although quite simple, represent different approaches for studying cognition. Using the results as an empirical philosophical aid, I note that there is no "best" approach for studying cognition, since different approaches have all advantages and disadvantages, because they study different aspects of cognition from different contexts. This has implications for current debates on "proper" approaches for cognition: all approaches are a bit proper, but none will be "proper enough". I draw remarks on the notion of cognition abstracting from all the approaches used to study it, and propose a simple classification for different types of cognition.
[ { "version": "v1", "created": "Thu, 19 Sep 2002 16:35:55 GMT" }, { "version": "v2", "created": "Mon, 23 Sep 2002 12:41:57 GMT" } ]
1,254,182,400,000
[ [ "Gershenson", "Carlos", "" ] ]
cs/0210004
Sylvain Lagrue
Salem Benferhat, Sylvain Lagrue, Odile Papini
Revising Partially Ordered Beliefs
figures made with the pstricks LaTeX package
Proc. of the 9th Workshop on Non-monotonic Reasoning (NMR'2002), pp. 142--149
null
null
cs.AI
null
This paper deals with the revision of partially ordered beliefs. It proposes a semantic representation of epistemic states by partial pre-orders on interpretations and a syntactic representation by partially ordered belief bases. Two revision operations, the revision stemming from the history of observations and the possibilistic revision, defined when the epistemic state is represented by a total pre-order, are generalized, at a semantic level, to the case of a partial pre-order on interpretations, and at a syntactic level, to the case of a partially ordered belief base. The equivalence between the two representations is shown for the two revision operations.
[ { "version": "v1", "created": "Thu, 3 Oct 2002 09:29:54 GMT" } ]
1,179,878,400,000
[ [ "Benferhat", "Salem", "" ], [ "Lagrue", "Sylvain", "" ], [ "Papini", "Odile", "" ] ]
cs/0211008
Victor Eliashberg
Victor Eliashberg
Can the whole brain be simpler than its "parts"?
No figures
null
null
AER0-02-10
cs.AI
null
This is the first in a series of connected papers discussing the problem of a dynamically reconfigurable universal learning neurocomputer that could serve as a computational model for the whole human brain. The whole series is entitled "The Brain Zero Project. My Brain as a Dynamically Reconfigurable Universal Learning Neurocomputer." (For more information visit the website www.brain0.com.) This introductory paper is concerned with general methodology. Its main goal is to explain why it is critically important for both neural modeling and cognitive modeling to pay much attention to the basic requirements of the whole brain as a complex computing system. The author argues that it can be easier to develop an adequate computational model for the whole "unprogrammed" (untrained) human brain than to find adequate formal representations of some nontrivial parts of brain's performance. (In the same way as, for example, it is easier to describe the behavior of a complex analytical function than the behavior of its real and/or imaginary part.) The "curse of dimensionality" that plagues purely phenomenological ("brainless") cognitive theories is a natural penalty for an attempt to represent insufficiently large parts of brain's performance in a state space of insufficiently high dimensionality. A "partial" modeler encounters "Catch 22." An attempt to simplify a cognitive problem by artificially reducing its dimensionality makes the problem more difficult.
[ { "version": "v1", "created": "Sat, 9 Nov 2002 17:16:18 GMT" }, { "version": "v2", "created": "Tue, 12 Nov 2002 04:59:40 GMT" }, { "version": "v3", "created": "Wed, 13 Nov 2002 06:06:59 GMT" }, { "version": "v4", "created": "Sat, 16 Nov 2002 01:01:06 GMT" } ]
1,179,878,400,000
[ [ "Eliashberg", "Victor", "" ] ]
cs/0211027
Carlos Gershenson
Carlos Gershenson
Adaptive Development of Koncepts in Virtual Animats: Insights into the Development of Knowledge
15 pages, COGS Adaptive Systems Essay
null
null
null
cs.AI
null
As part of our effort to study the evolution and development of cognition, we present results derived from synthetic experiments in a virtual laboratory where animats develop koncepts adaptively and ground their meaning through action. We introduce the term "koncept" to avoid the confusion and ambiguity derived from the wide use of the word "concept". We present the models which our animats use for abstracting koncepts from perceptions, plastically adapting koncepts, and associating koncepts with actions. On a more philosophical vein, we suggest that knowledge is a property of a cognitive system, not an element, and is therefore observer-dependent.
[ { "version": "v1", "created": "Thu, 21 Nov 2002 18:13:31 GMT" } ]
1,179,878,400,000
[ [ "Gershenson", "Carlos", "" ] ]
cs/0211038
Carlos Gershenson
Carlos Gershenson, Pedro Pablo Gonzalez
Dynamic Adjustment of the Motivation Degree in an Action Selection Mechanism
7 pages, Proceedings of ISA '2000. Wollongong, Australia
null
null
null
cs.AI
null
This paper presents a model for dynamic adjustment of the motivation degree, using a reinforcement learning approach, in an action selection mechanism previously developed by the authors. The learning takes place in the modification of a parameter of the model of combination of internal and external stimuli. Experiments that show the claimed properties are presented, using a VR simulation developed for such purposes. The importance of adaptation by learning in action selection is also discussed.
[ { "version": "v1", "created": "Wed, 27 Nov 2002 10:35:50 GMT" } ]
1,179,878,400,000
[ [ "Gershenson", "Carlos", "" ], [ "Gonzalez", "Pedro Pablo", "" ] ]
cs/0211039
Carlos Gershenson
Carlos Gershenson Garcia, Pedro Pablo Gonzalez Perez, Jose Negrete Martinez
Action Selection Properties in a Software Simulated Agent
12 pages, in MICAI 2000: Advances in Artificial Intelligence. Lecture Notes in Artificial Intelligence 1793, pp. 634-648. Springer-Verlag
MICAI 2000: Advances in Artificial Intelligence. Lecture Notes in Artificial Intelligence 1793, pp. 634-648. Springer-Verlag
null
null
cs.AI
null
This article analyses the properties of the Internal Behaviour network, an action selection mechanism previously proposed by the authors, with the aid of a simulation developed for this purpose. A brief review of the Internal Behaviour network is followed by an explanation of the implementation of the simulation. Experiments are then presented and discussed, analysing the properties of action selection in the proposed model.
[ { "version": "v1", "created": "Wed, 27 Nov 2002 10:42:31 GMT" } ]
1,179,878,400,000
[ [ "Garcia", "Carlos Gershenson", "" ], [ "Perez", "Pedro Pablo Gonzalez", "" ], [ "Martinez", "Jose Negrete", "" ] ]
cs/0211040
Carlos Gershenson
Pedro Pablo Gonzalez Perez, Jose Negrete Martinez, Ariel Barreiro Garcia, Carlos Gershenson Garcia
A Model for Combination of External and Internal Stimuli in the Action Selection of an Autonomous Agent
13 pages, in MICAI 2000: Advances in Artificial Intelligence. Lecture Notes in Artificial Intelligence 1793, pp. 621-633. Springer-Verlag
MICAI 2000: Advances in Artificial Intelligence. Lecture Notes in Artificial Intelligence 1793, pp. 621-633. Springer-Verlag
null
null
cs.AI
null
This paper proposes a model for the combination of external and internal stimuli for action selection in an autonomous agent, based on an action selection mechanism previously proposed by the authors. The combination model includes additive and multiplicative elements, which make it possible to incorporate new properties that enhance the action selection. A parameter a, which is part of the proposed model, regulates the degree to which the observed external behaviour depends on the internal states of the entity.
[ { "version": "v1", "created": "Wed, 27 Nov 2002 10:45:50 GMT" } ]
1,179,878,400,000
[ [ "Perez", "Pedro Pablo Gonzalez", "" ], [ "Martinez", "Jose Negrete", "" ], [ "Garcia", "Ariel Barreiro", "" ], [ "Garcia", "Carlos Gershenson", "" ] ]
cs/0212025
Balint Takacs
Istvan Szita, Balint Takacs and Andras Lorincz
Searching for Plannable Domains can Speed up Reinforcement Learning
null
null
null
null
cs.AI
null
Reinforcement learning (RL) involves sequential decision making in uncertain environments. The aim of the decision-making agent is to maximize the benefit of acting in its environment over an extended period of time. Finding an optimal policy in RL may be very slow. To speed up learning, one frequently used solution is the integration of planning, for example, Sutton's Dyna algorithm, or various other methods using macro-actions. Here we suggest separating the plannable, i.e., close-to-deterministic, parts of the world and focusing planning efforts on this domain. A novel reinforcement learning method called plannable RL (pRL) is proposed here. pRL builds a simple model, which is used to search for macro actions. The simplicity of the model makes planning computationally inexpensive. It is shown that pRL finds an optimal policy, and that plannable macro actions found by pRL are near-optimal. In turn, it is unnecessary to try large numbers of macro actions, which enables fast learning. The utility of pRL is demonstrated by computer simulations.
[ { "version": "v1", "created": "Tue, 10 Dec 2002 22:15:25 GMT" } ]
1,179,878,400,000
[ [ "Szita", "Istvan", "" ], [ "Takacs", "Balint", "" ], [ "Lorincz", "Andras", "" ] ]
cs/0301006
Balint Takacs
Balint Takacs, Istvan Szita, Andras Lorincz
Temporal plannability by variance of the episode length
null
null
null
null
cs.AI
null
Optimization of decision problems in stochastic environments is usually concerned with maximizing the probability of achieving the goal and minimizing the expected episode length. For interacting agents in time-critical applications, learning of the possibility of scheduling of subtasks (events) or the full task is an additional relevant issue. Besides, there exist highly stochastic problems where the actual trajectories show great variety from episode to episode, but completing the task takes almost the same amount of time. The identification of sub-problems of this nature may promote, e.g., the planning, scheduling and segmenting of Markov decision processes. In this work, formulae for the average duration as well as the standard deviation of the duration of events are derived. The emerging Bellman-type equation is a simple extension of Sobel's work (1982). Methods of dynamic programming as well as methods of reinforcement learning can be applied to our extension. A computer demonstration on a toy problem serves to highlight the principle.
[ { "version": "v1", "created": "Thu, 9 Jan 2003 12:39:03 GMT" } ]
1,179,878,400,000
[ [ "Takacs", "Balint", "" ], [ "Szita", "Istvan", "" ], [ "Lorincz", "Andras", "" ] ]
cs/0301010
Kewen Wang
Kewen Wang, Lizhu Zhou
Comparisons and Computation of Well-founded Semantics for Disjunctive Logic Programs
31 pages
null
null
null
cs.AI
null
Much work has been done on extending the well-founded semantics to general disjunctive logic programs and various approaches have been proposed. However, these semantics differ from each other and no consensus has been reached about which semantics is the intended one. In this paper we look at disjunctive well-founded reasoning from different angles. We show that there is an intuitive form of well-founded reasoning in disjunctive logic programming which can be characterized by slightly modifying some existing approaches to defining disjunctive well-founded semantics, including program transformations, argumentation, unfounded sets, and a resolution-like procedure. We also provide a bottom-up procedure for this semantics. The significance of our work lies not only in clarifying the relationship among different approaches, but also in shedding light on what an intended well-founded semantics for disjunctive logic programs is.
[ { "version": "v1", "created": "Tue, 14 Jan 2003 08:14:43 GMT" }, { "version": "v2", "created": "Thu, 16 Jan 2003 01:10:15 GMT" } ]
1,179,878,400,000
[ [ "Wang", "Kewen", "" ], [ "Zhou", "Lizhu", "" ] ]
cs/0301023
T. Schaub
Torsten Schaub and Kewen Wang
A semantic framework for preference handling in answer set programming
39 pages. To appear in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
We provide a semantic framework for preference handling in answer set programming. To this end, we introduce preference preserving consequence operators. The resulting fixpoint characterizations provide us with a uniform semantic framework for characterizing preference handling in existing approaches. Although our approach is extensible to other semantics by means of an alternating fixpoint theory, we focus here on the elaboration of preferences under answer set semantics. Alternatively, we show how these approaches can be characterized by the concept of order preservation. These uniform semantic characterizations provide us with new insights about interrelationships and moreover about ways of implementation.
[ { "version": "v1", "created": "Thu, 23 Jan 2003 09:09:31 GMT" } ]
1,179,878,400,000
[ [ "Schaub", "Torsten", "" ], [ "Wang", "Kewen", "" ] ]
cs/0302029
Alejandro Javier Garcia
Alejandro Javier Garcia and Guillermo Ricardo Simari
Defeasible Logic Programming: An Argumentative Approach
43 pages, to appear in the journal "Theory and Practice of Logic Programming"
null
null
null
cs.AI
null
The work reported here introduces Defeasible Logic Programming (DeLP), a formalism that combines results of Logic Programming and Defeasible Argumentation. DeLP provides the possibility of representing information in the form of weak rules in a declarative manner, and a defeasible argumentation inference mechanism for warranting the entailed conclusions. In DeLP an argumentation formalism is used for deciding between contradictory goals. Queries are supported by arguments that could be defeated by other arguments. A query q succeeds when there is an argument A for q that is warranted, i.e., the argument A that supports q is found undefeated by a warrant procedure that implements a dialectical analysis. The defeasible argumentation basis of DeLP makes it possible to build applications that deal with incomplete and contradictory information in dynamic domains. Thus, the resulting approach is suitable for representing an agent's knowledge and for providing agents with an argumentation-based reasoning mechanism.
[ { "version": "v1", "created": "Thu, 20 Feb 2003 00:48:06 GMT" } ]
1,179,878,400,000
[ [ "Garcia", "Alejandro Javier", "" ], [ "Simari", "Guillermo Ricardo", "" ] ]
cs/0302036
Evgueni Petrov
Evgueni Petrov, Eric Monfroy
Constraint-based analysis of composite solvers
submitted to AI SAC 2004
null
null
null
cs.AI
null
Cooperative constraint solving is an area of constraint programming that studies the interaction between constraint solvers with the aim of discovering the interaction patterns that amplify the positive qualities of individual solvers. Automation and formalisation of such studies are important issues in cooperative constraint solving. In this paper we present a constraint-based analysis of composite solvers that integrates reasoning about the individual solvers and the processed data. The idea is to approximate this reasoning by the resolution of set constraints on the finite sets representing the predicates that express all the necessary properties. We illustrate the application of our analysis to two important cooperation patterns: deterministic choice and loop.
[ { "version": "v1", "created": "Tue, 25 Feb 2003 14:33:08 GMT" }, { "version": "v2", "created": "Sun, 7 Sep 2003 23:03:03 GMT" } ]
1,179,878,400,000
[ [ "Petrov", "Evgueni", "" ], [ "Monfroy", "Eric", "" ] ]
cs/0302039
Barnabas Poczos
Barnabas Poczos, Andras Lorincz
Kalman-filtering using local interactions
null
null
null
null
cs.AI
null
There is a growing interest in using Kalman-filter models for brain modelling. In turn, it is of considerable importance to represent the Kalman filter in connectionist form with local Hebbian learning rules. To the best of our knowledge, the Kalman filter has not been given such a local representation. It seems that the main obstacle is the dynamic adaptation of the Kalman gain. Here, a connectionist representation is presented, which is derived by means of the recursive prediction error method. We show that this method gives rise to attractive local learning rules and can adapt the Kalman gain.
[ { "version": "v1", "created": "Fri, 28 Feb 2003 18:32:26 GMT" } ]
1,179,878,400,000
[ [ "Poczos", "Barnabas", "" ], [ "Lorincz", "Andras", "" ] ]
cs/0303006
Carlos Gershenson
Carlos Gershenson
On the Notion of Cognition
6 pages, 2 figures
null
null
null
cs.AI
null
We discuss philosophical issues concerning the notion of cognition, basing ourselves on experimental results in the cognitive sciences, especially computer simulations of cognitive systems. There have been debates on the "proper" approach for studying cognition, but we have realized that all approaches can in theory be equivalent. Different approaches model different properties of cognitive systems from different perspectives, so we can only learn from all of them. We also integrate ideas from several perspectives for enhancing the notion of cognition, such that it can contain other definitions of cognition as special cases. This allows us to propose a simple classification of different types of cognition.
[ { "version": "v1", "created": "Mon, 10 Mar 2003 18:20:28 GMT" } ]
1,179,878,400,000
[ [ "Gershenson", "Carlos", "" ] ]
cs/0303009
Tomi Janhunen
T. Janhunen, I. Niemela, D. Seipel, P. Simons, J. You
Unfolding Partiality and Disjunctions in Stable Model Semantics
49 pages, 4 figures, 1 table
null
null
null
cs.AI
null
The paper studies an implementation methodology for partial and disjunctive stable models where partiality and disjunctions are unfolded from a logic program so that an implementation of stable models for normal (disjunction-free) programs can be used as the core inference engine. The unfolding is done in two separate steps. Firstly, it is shown that partial stable models can be captured by total stable models using a simple linear and modular program transformation. Hence, reasoning tasks concerning partial stable models can be solved using an implementation of total stable models. Disjunctive partial stable models have been lacking implementations, which now become available as the translation also handles the disjunctive case. Secondly, it is shown how total stable models of disjunctive programs can be determined by computing stable models for normal programs. Hence, an implementation of stable models of normal programs can be used as a core engine for implementing disjunctive programs. The feasibility of the approach is demonstrated by constructing a system for computing stable models of disjunctive programs using the smodels system as the core engine. The performance of the resulting system is compared to that of dlv, which is a state-of-the-art special purpose system for disjunctive programs.
[ { "version": "v1", "created": "Fri, 14 Mar 2003 14:29:32 GMT" }, { "version": "v2", "created": "Fri, 2 Jan 2004 14:27:37 GMT" } ]
1,179,878,400,000
[ [ "Janhunen", "T.", "" ], [ "Niemela", "I.", "" ], [ "Seipel", "D.", "" ], [ "Simons", "P.", "" ], [ "You", "J.", "" ] ]
cs/0303018
Hedvig Sidenbladh
Hedvig Sidenbladh
Multi-target particle filtering for the probability hypothesis density
Submitted to International Conference on Information Fusion 2003
null
null
null
cs.AI
null
When tracking a large number of targets, it is often computationally expensive to represent the full joint distribution over target states. In cases where the targets move independently, each target can instead be tracked with a separate filter. However, this leads to a model-data association problem. Another approach to solve the problem with computational complexity is to track only the first moment of the joint distribution, the probability hypothesis density (PHD). The integral of this distribution over any area S is the expected number of targets within S. Since no record of object identity is kept, the model-data association problem is avoided. The contribution of this paper is a particle filter implementation of the PHD filter mentioned above. This PHD particle filter is applied to tracking of multiple vehicles in terrain, a non-linear tracking problem. Experiments show that the filter can track a changing number of vehicles robustly, achieving near-real-time performance.
[ { "version": "v1", "created": "Thu, 20 Mar 2003 13:48:04 GMT" } ]
1,179,878,400,000
[ [ "Sidenbladh", "Hedvig", "" ] ]
cs/0305001
Ambuj Mahanti
Ambuj Mahanti, Supriyo Ghose and Samir K. Sadhukhan
A Framework for Searching AND/OR Graphs with Cycles
40 pages, 20 figures, 5 tables
null
null
null
cs.AI
null
Search in cyclic AND/OR graphs was traditionally known to be an unsolved problem. In the recent past several important studies have been reported in this domain. In this paper, we take a fresh look at the problem. First, a new and comprehensive theoretical framework for cyclic AND/OR graphs is presented, which was found missing in the recent literature. Based on this framework, two best-first search algorithms, S1 and S2, have been developed. S1 does uninformed search and is a simple modification of the Bottom-up algorithm by Martelli and Montanari. S2 performs a heuristically guided search and replicates the modification in Bottom-up's successors, namely HS and AO*. Both S1 and S2 solve the problem of searching AND/OR graphs in the presence of cycles. We then present a detailed analysis of the correctness and complexity results of S1 and S2, using the proposed framework. We have observed through experiments that S1 and S2 output correct results in all cases.
[ { "version": "v1", "created": "Thu, 1 May 2003 04:48:29 GMT" } ]
1,179,878,400,000
[ [ "Mahanti", "Ambuj", "" ], [ "Ghose", "Supriyo", "" ], [ "Sadhukhan", "Samir K.", "" ] ]
cs/0305019
Johan Schubert
Johan Schubert
On rho in a Decision-Theoretic Apparatus of Dempster-Shafer Theory
16 pages, 2 figures
International Journal of Approximate Reasoning 13(3), 185-200, 1995
null
FOA-B-95-00097-3.4-SE
cs.AI
null
Thomas M. Strat has developed a decision-theoretic apparatus for Dempster-Shafer theory (Decision analysis using belief functions, Intern. J. Approx. Reason. 4(5/6), 391-417, 1990). In this apparatus, expected utility intervals are constructed for different choices. The choice with the highest expected utility is preferable to others. However, to find the preferred choice when the expected utility interval of one choice is included in that of another, it is necessary to interpolate a discerning point in the intervals. This is done by the parameter rho, defined as the probability that the ambiguity about the utility of every nonsingleton focal element will turn out as favorable as possible. If there are several different decision makers, we might sometimes be more interested in having the highest expected utility among the decision makers rather than only trying to maximize our own expected utility regardless of choices made by other decision makers. The preference of each choice is then determined by the probability of yielding the highest expected utility. This probability is equal to the maximal interval length of rho under which an alternative is preferred. We must here take into account not only the choices already made by other decision makers but also the rational choices we can assume to be made by later decision makers. In Strat's apparatus, an assumption, unwarranted by the evidence at hand, has to be made about the value of rho. We demonstrate that no such assumption is necessary. It is sufficient to assume a uniform probability distribution for rho to be able to discern the most preferable choice. We discuss when this approach is justifiable.
[ { "version": "v1", "created": "Fri, 16 May 2003 15:07:09 GMT" } ]
1,416,182,400,000
[ [ "Schubert", "Johan", "" ] ]
cs/0305044
Marco Zaffalon
Gert de Cooman and Marco Zaffalon
Updating beliefs with incomplete observations
Replaced with extended version
null
null
null
cs.AI
null
Currently, there is renewed interest in the problem, raised by Shafer in 1985, of updating probabilities when observations are incomplete. This is a fundamental problem in general, and of particular interest for Bayesian networks. Recently, Grunwald and Halpern have shown that commonly used updating strategies fail in this case, except under very special assumptions. In this paper we propose a new method for updating probabilities with incomplete observations. Our approach is deliberately conservative: we make no assumptions about the so-called incompleteness mechanism that associates complete with incomplete observations. We model our ignorance about this mechanism by a vacuous lower prevision, a tool from the theory of imprecise probabilities, and we use only coherence arguments to turn prior into posterior probabilities. In general, this new approach to updating produces lower and upper posterior probabilities and expectations, as well as partially determinate decisions. This is a logical consequence of the existing ignorance about the incompleteness mechanism. We apply the new approach to the problem of classification of new evidence in probabilistic expert systems, where it leads to a new, so-called conservative updating rule. In the special case of Bayesian networks constructed using expert knowledge, we provide an exact algorithm for classification based on our updating rule, which has linear-time complexity for a class of networks wider than polytrees. This result is then extended to the more general framework of credal networks, where computations are often much harder than with Bayesian nets. Using an example, we show that our rule appears to provide a solid basis for reliable updating with incomplete observations, when no strong assumptions about the incompleteness mechanism are justified.
[ { "version": "v1", "created": "Tue, 27 May 2003 11:05:52 GMT" }, { "version": "v2", "created": "Mon, 17 May 2004 15:03:28 GMT" } ]
1,179,878,400,000
[ [ "de Cooman", "Gert", "" ], [ "Zaffalon", "Marco", "" ] ]
cs/0306124
Joseph Y. Halpern
Peter D. Grunwald and Joseph Y. Halpern
Updating Probabilities
This is an expanded version of a paper that appeared in Proceedings of the Eighteenth Conference on Uncertainty in AI, 2002, pp. 187--196. to appear, Journal of AI Research
null
null
null
cs.AI
null
As examples such as the Monty Hall puzzle show, applying conditioning to update a probability distribution on a ``naive space'', which does not take into account the protocol used, can often lead to counterintuitive results. Here we examine why. A criterion known as CAR (``coarsening at random'') in the statistical literature characterizes when ``naive'' conditioning in a naive space works. We show that the CAR condition holds rather infrequently, and we provide a procedural characterization of it, by giving a randomized algorithm that generates all and only distributions for which CAR holds. This substantially extends previous characterizations of CAR. We also consider more generalized notions of update such as Jeffrey conditioning and minimizing relative entropy (MRE). We give a generalization of the CAR condition that characterizes when Jeffrey conditioning leads to appropriate answers, and show that there exist some very simple settings in which MRE essentially never gives the right results. This generalizes and interconnects previous results obtained in the literature on CAR and MRE.
[ { "version": "v1", "created": "Mon, 23 Jun 2003 22:24:05 GMT" } ]
1,179,878,400,000
[ [ "Grunwald", "Peter D.", "" ], [ "Halpern", "Joseph Y.", "" ] ]
cs/0306135
Laurent Henocque
Stephane Grandcolas and Laurent Henocque and Nicolas Prcovic
Pruning Isomorphic Structural Sub-problems in Configuration
This research report contains the proofs and full details missing from the short paper "A Canonicity Test for Configuration" in proceedings of conference CP'03
null
null
LSIS-2003-004
cs.AI
null
Configuring consists in simulating the realization of a complex product from a catalog of component parts, using known relations between types, and picking values for object attributes. This highly combinatorial problem in the field of constraint programming has been addressed with a variety of approaches since the foundational system R1 (McDermott82). An inherent difficulty in solving configuration problems is the existence of many isomorphisms among interpretations. We describe a formalism-independent approach to improve the detection of isomorphisms by configurators, which does not require adapting the problem model. To achieve this, we exploit the properties of a characteristic subset of configuration problems, called the structural sub-problem, whose canonical solutions can be produced or tested at limited cost. In this paper we present an algorithm for testing the canonicity of configurations, which can be added as a symmetry-breaking constraint to any configurator. The cost and efficiency of this canonicity test are given.
[ { "version": "v1", "created": "Fri, 27 Jun 2003 11:25:17 GMT" } ]
1,179,878,400,000
[ [ "Grandcolas", "Stephane", "" ], [ "Henocque", "Laurent", "" ], [ "Prcovic", "Nicolas", "" ] ]
cs/0307010
J. G. Wolff
J Gerard Wolff
Probabilistic Reasoning as Information Compression by Multiple Alignment, Unification and Search: An Introduction and Overview
null
Journal of Universal Computer Science 5 (7), 418--462, 1999
null
null
cs.AI
null
This article introduces the idea that probabilistic reasoning (PR) may be understood as "information compression by multiple alignment, unification and search" (ICMAUS). In this context, multiple alignment has a meaning which is similar to but distinct from its meaning in bio-informatics, while unification means a simple merging of matching patterns, a meaning which is related to but simpler than the meaning of that term in logic. A software model, SP61, has been developed for the discovery and formation of 'good' multiple alignments, evaluated in terms of information compression. The model is described in outline. Using examples from the SP61 model, this article describes in outline how the ICMAUS framework can model various kinds of PR including: PR in best-match pattern recognition and information retrieval; one-step 'deductive' and 'abductive' PR; inheritance of attributes in a class hierarchy; chains of reasoning (probabilistic decision networks and decision trees, and PR with 'rules'); geometric analogy problems; nonmonotonic reasoning and reasoning with default values; modelling the function of a Bayesian network.
[ { "version": "v1", "created": "Fri, 4 Jul 2003 16:34:45 GMT" }, { "version": "v2", "created": "Sun, 6 Jul 2003 15:15:41 GMT" } ]
1,179,878,400,000
[ [ "Wolff", "J Gerard", "" ] ]
cs/0307025
J. G. Wolff
J Gerard Wolff
Information Compression by Multiple Alignment, Unification and Search as a Unifying Principle in Computing and Cognition
null
Artificial Intelligence Review 19(3), 193-230, 2003
null
null
cs.AI
null
This article presents an overview of the idea that "information compression by multiple alignment, unification and search" (ICMAUS) may serve as a unifying principle in computing (including mathematics and logic) and in such aspects of human cognition as the analysis and production of natural language, fuzzy pattern recognition and best-match information retrieval, concept hierarchies with inheritance of attributes, probabilistic reasoning, and unsupervised inductive learning. The ICMAUS concepts are described together with an outline of the SP61 software model in which the ICMAUS concepts are currently realised. A range of examples is presented, illustrated with output from the SP61 model.
[ { "version": "v1", "created": "Thu, 10 Jul 2003 15:32:31 GMT" } ]
1,179,878,400,000
[ [ "Wolff", "J Gerard", "" ] ]
cs/0307048
Amar Isli
Amar Isli
Integrating cardinal direction relations and other orientation relations in Qualitative Spatial Reasoning
Includes new material, such as a section on the use of the work in the concrete domain of the ALC(D) spatio-temporalisation defined in http://arXiv.org/abs/cs.AI/0307040
null
null
Technical report FBI-HH-M-304/01, Fachbereich Informatik, Universitaet Hamburg
cs.AI
null
We propose a calculus integrating two calculi well-known in Qualitative Spatial Reasoning (QSR): Frank's projection-based cardinal direction calculus, and a coarser version of Freksa's relative orientation calculus. An original constraint propagation procedure is presented, which implements the interaction between the two integrated calculi. The importance of taking into account the interaction is shown with a real example providing an inconsistent knowledge base, whose inconsistency (a) cannot be detected by reasoning separately about each of the two components of the knowledge, just because, taken separately, each is consistent, but (b) is detected by the proposed algorithm, thanks to the interaction knowledge propagated from each of the two components to the other.
[ { "version": "v1", "created": "Mon, 21 Jul 2003 13:03:19 GMT" }, { "version": "v2", "created": "Tue, 5 Oct 2004 15:45:31 GMT" } ]
1,179,878,400,000
[ [ "Isli", "Amar", "" ] ]
cs/0307050
Amar Isli
Amar Isli
A ternary Relation Algebra of directed lines
60 pages. Submitted. Technical report mentioned in "Report-no" below is an earlier version of the work, and its title differs slightly (Reasoning about relative position of directed lines as a ternary Relation Algebra (RA): presentation of the RA and of its use in the concrete domain of an ALC(D)-like description logic)
null
null
Technical report FBI-HH-M-313/02, Fachbereich Informatik, Universitaet Hamburg
cs.AI
null
We define a ternary Relation Algebra (RA) of relative position relations on two-dimensional directed lines (d-lines for short). A d-line has two degrees of freedom (DFs): a rotational DF (RDF), and a translational DF (TDF). The representation of the RDF of a d-line will be handled by an RA of 2D orientations, CYC_t, known in the literature. A second algebra, TA_t, which will handle the TDF of a d-line, will be defined. The two algebras, CYC_t and TA_t, will constitute, respectively, the translational and the rotational components of the RA, PA_t, of relative position relations on d-lines: the PA_t atoms will consist of those pairs <t,r> of a TA_t atom and a CYC_t atom that are compatible. We present in detail the RA PA_t, with its converse table, its rotation table and its composition tables. We show that a (polynomial) constraint propagation algorithm, known in the literature, is complete for a subset of PA_t relations including almost all of the atomic relations. We will discuss the application scope of the RA, which includes incidence geometry, GIS (Geographic Information Systems), shape representation, localisation in (multi-)robot navigation, and the representation of motion prepositions in NLP (Natural Language Processing). We then compare the RA to existing ones, such as an algebra for reasoning about rectangles parallel to the axes of an (orthogonal) coordinate system, a ``spatial Odyssey'' of Allen's interval algebra, and an algebra for reasoning about 2D segments.
[ { "version": "v1", "created": "Mon, 21 Jul 2003 16:01:11 GMT" } ]
1,179,878,400,000
[ [ "Isli", "Amar", "" ] ]
cs/0307056
Joseph Y. Halpern
Fahiem Bacchus, Adam Grove, Joseph Y. Halpern, and Daphne Koller
From Statistical Knowledge Bases to Degrees of Belief
null
Artificial Intelligence 87:1-2, 1996, pp. 75-143
null
null
cs.AI
null
An intelligent agent will often be uncertain about various properties of its environment, and when acting in that environment it will frequently need to quantify its uncertainty. For example, if the agent wishes to employ the expected-utility paradigm of decision theory to guide its actions, it will need to assign degrees of belief (subjective probabilities) to various assertions. Of course, these degrees of belief should not be arbitrary, but rather should be based on the information available to the agent. This paper describes one approach for inducing degrees of belief from very rich knowledge bases, that can include information about particular individuals, statistical correlations, physical laws, and default rules. We call our approach the random-worlds method. The method is based on the principle of indifference: it treats all of the worlds the agent considers possible as being equally likely. It is able to integrate qualitative default reasoning with quantitative probabilistic reasoning by providing a language in which both types of information can be easily expressed. Our results show that a number of desiderata that arise in direct inference (reasoning from statistical information to conclusions about individuals) and default reasoning follow directly {from} the semantics of random worlds. For example, random worlds captures important patterns of reasoning such as specificity, inheritance, indifference to irrelevant information, and default assumptions of independence. Furthermore, the expressive power of the language used and the intuitive semantics of random worlds allow the method to deal with problems that are beyond the scope of many other non-deductive reasoning systems.
[ { "version": "v1", "created": "Thu, 24 Jul 2003 21:32:09 GMT" } ]
1,179,878,400,000
[ [ "Bacchus", "Fahiem", "" ], [ "Grove", "Adam", "" ], [ "Halpern", "Joseph Y.", "" ], [ "Koller", "Daphne", "" ] ]
cs/0307063
J. G. Wolff
J Gerard Wolff
An Alternative to RDF-Based Languages for the Representation and Processing of Ontologies in the Semantic Web
null
null
null
null
cs.AI
null
This paper describes an approach to the representation and processing of ontologies in the Semantic Web, based on the ICMAUS theory of computation and AI. This approach has strengths that complement those of languages based on the Resource Description Framework (RDF) such as RDF Schema and DAML+OIL. The main benefits of the ICMAUS approach are simplicity and comprehensibility in the representation of ontologies, an ability to cope with errors and uncertainties in knowledge, and a versatile reasoning system with capabilities in the kinds of probabilistic reasoning that seem to be required in the Semantic Web.
[ { "version": "v1", "created": "Tue, 29 Jul 2003 11:13:41 GMT" } ]
1,179,878,400,000
[ [ "Wolff", "J Gerard", "" ] ]
cs/0308002
Aleks Jakulin
Aleks Jakulin and Ivan Bratko
Quantifying and Visualizing Attribute Interactions
30 pages, 11 figures. Changes from v2: improved bibliography
null
null
null
cs.AI
null
Interactions are patterns between several attributes in data that cannot be inferred from any subset of these attributes. While mutual information is a well-established approach to evaluating the interactions between two attributes, we survey its generalizations for quantifying interactions between several attributes. We have chosen McGill's interaction information, which has been independently rediscovered a number of times under various names in various disciplines, because of its many intuitively appealing properties. We apply interaction information to visually present the most important interactions in the data. Visualization of interactions has provided insight into the structure of data in a number of domains, identifying redundant attributes and opportunities for constructing new features, discovering unexpected regularities in data, and helping during the construction of predictive models; we illustrate the methods on numerous examples. A machine learning method that disregards interactions may get caught in two traps: myopia is caused by learning algorithms assuming independence in spite of interactions, whereas fragmentation arises from assuming an interaction in spite of independence.
[ { "version": "v1", "created": "Fri, 1 Aug 2003 10:50:07 GMT" }, { "version": "v2", "created": "Tue, 11 Nov 2003 13:07:12 GMT" }, { "version": "v3", "created": "Tue, 2 Mar 2004 12:57:55 GMT" } ]
1,179,878,400,000
[ [ "Jakulin", "Aleks", "" ], [ "Bratko", "Ivan", "" ] ]
cs/0309025
Johan Schubert
Johan Schubert
Evidential Force Aggregation
7 pages, 2 figures
in Proceedings of the Sixth International Conference on Information Fusion (FUSION 2003), pp. 1223-1229, Cairns, Australia, 8-11 July 2003, International Society of Information Fusion, 2003
null
FOI-S-0960-SE
cs.AI
null
In this paper we develop an evidential force aggregation method intended for classification of evidential intelligence into recognized force structures. We assume that the intelligence has already been partitioned into clusters and use the classification method individually in each cluster. The classification is based on a measure of fitness between template and fused intelligence that makes it possible to handle intelligence reports with multiple nonspecific and uncertain propositions. With this measure we can aggregate on a level-by-level basis, starting from general intelligence to achieve a complete force structure with recognized units on all hierarchical levels.
[ { "version": "v1", "created": "Mon, 15 Sep 2003 07:20:48 GMT" } ]
1,179,878,400,000
[ [ "Schubert", "Johan", "" ] ]
cs/0310023
Igor Bocharov
Igor Bocharov, Pavel Lukin
Application of Kullback-Leibler Metric to Speech Recognition
10 pages, 4 figures, Word to PDF auto converted
null
null
null
cs.AI
null
This article discusses the application of the Kullback-Leibler divergence to the recognition of speech signals and suggests three algorithms implementing this divergence criterion: a correlation algorithm, a spectral algorithm and a filter algorithm. The discussion covers an approach to the problem of speech variability and is illustrated with the results of experimental modeling of speech signals. The article gives a number of recommendations on the choice of appropriate model parameters and provides a comparison to some other methods of speech recognition.
[ { "version": "v1", "created": "Mon, 13 Oct 2003 16:17:51 GMT" } ]
1,179,878,400,000
[ [ "Bocharov", "Igor", "" ], [ "Lukin", "Pavel", "" ] ]
cs/0310044
Ali Abbas E.
Ali E. Abbas
The Algebra of Utility Inference
15 pages
null
null
null
cs.AI
null
Richard Cox [1] set the axiomatic foundations of probable inference and the algebra of propositions. He showed that consistency within these axioms requires certain rules for updating belief. In this paper we use the analogy between probability and utility introduced in [2] to propose an axiomatic foundation for utility inference and the algebra of preferences. We show that consistency within these axioms requires certain rules for updating preference. We discuss a class of utility functions that stems from the axioms of utility inference and show that this class is the basic building block for any general multiattribute utility function. We use this class of utility functions together with the algebra of preferences to construct utility functions represented by logical operations on the attributes.
[ { "version": "v1", "created": "Thu, 23 Oct 2003 01:13:20 GMT" } ]
1,179,878,400,000
[ [ "Abbas", "Ali E.", "" ] ]
cs/0310045
Ali Abbas E.
Ali E. Abbas
An information theory for preferences
null
null
10.1063/1.1751362
null
cs.AI
null
Recent literature from the last Maximum Entropy workshop introduced an analogy between cumulative probability distributions and normalized utility functions. Based on this analogy, a utility density function can be defined as the derivative of a normalized utility function. A utility density function is non-negative and integrates to unity. These two properties form the basis of a correspondence between utility and probability. A natural application of this analogy is a maximum entropy principle to assign maximum entropy utility values. Maximum entropy utility interprets many of the common utility functions based on the preference information needed for their assignment, and helps assign utility values based on partial preference information. This paper reviews maximum entropy utility and introduces further results that stem from the duality between probability and utility.
[ { "version": "v1", "created": "Thu, 23 Oct 2003 01:34:44 GMT" } ]
1,257,811,200,000
[ [ "Abbas", "Ali E.", "" ] ]
cs/0310047
Simona Perri
Simona Perri, Francesco Scarcello, Nicola Leone
Abductive Logic Programs with Penalization: Semantics, Complexity and Implementation
36 pages; will be published in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
Abduction, first proposed in the setting of classical logics, has been studied with growing interest in the logic programming area during the last years. In this paper we study abduction with penalization in the logic programming framework. This form of abductive reasoning, which has not been previously analyzed in logic programming, turns out to represent several relevant problems, including optimization problems, very naturally. We define a formal model for abduction with penalization over logic programs, which extends the abductive framework proposed by Kakas and Mancarella. We address knowledge representation issues, encoding a number of problems in our abductive framework. In particular, we consider some relevant problems, taken from different domains, ranging from optimization theory to diagnosis and planning; their encodings turn out to be simple and elegant in our formalism. We thoroughly analyze the computational complexity of the main problems arising in the context of abduction with penalization from logic programs. Finally, we implement a system supporting the proposed abductive framework on top of the DLV engine. To this end, we design a translation from abduction problems with penalties into logic programs with weak constraints. We prove that this approach is sound and complete.
[ { "version": "v1", "created": "Fri, 24 Oct 2003 18:03:06 GMT" } ]
1,179,878,400,000
[ [ "Perri", "Simona", "" ], [ "Scarcello", "Francesco", "" ], [ "Leone", "Nicola", "" ] ]
cs/0310061
Miroslaw Truszczynski
Lengning Liu, Miroslaw Truszczynski
Local-search techniques for propositional logic extended with cardinality constraints
Proceedings of the 9th International Conference on Principles and Practice of Constraint Programming - CP 2003, LNCS 2833, pp. 495-509
null
null
null
cs.AI
null
We study local-search satisfiability solvers for propositional logic extended with cardinality atoms, that is, expressions that provide explicit ways to model constraints on cardinalities of sets. Adding cardinality atoms to the language of propositional logic facilitates modeling search problems and often results in concise encodings. We propose two ``native'' local-search solvers for theories in the extended language. We also describe techniques that reduce the problem to standard propositional satisfiability and allow us to use off-the-shelf SAT solvers. We study these methods experimentally. Our general finding is that native solvers designed specifically for the extended language perform better than indirect methods relying on SAT solvers.
[ { "version": "v1", "created": "Fri, 31 Oct 2003 16:29:02 GMT" } ]
1,179,878,400,000
[ [ "Liu", "Lengning", "" ], [ "Truszczynski", "Miroslaw", "" ] ]
cs/0310062
Miroslaw Truszczynski
Lengning Liu, Miroslaw Truszczynski
WSAT(cc) - a fast local-search ASP solver
Proceedings of LPNMR-03 (7th International Conference), LNCS, Springer Verlag
null
null
null
cs.AI
null
We describe WSAT(cc), a local-search solver for computing models of theories in the language of propositional logic extended by cardinality atoms. WSAT(cc) is a processing back-end for the logic PS+, a recently proposed formalism for answer-set programming.
[ { "version": "v1", "created": "Fri, 31 Oct 2003 16:46:07 GMT" } ]
1,179,878,400,000
[ [ "Liu", "Lengning", "" ], [ "Truszczynski", "Miroslaw", "" ] ]
cs/0311004
Ali Abbas E.
Ali Abbas, Jim Matheson
Utility-Probability Duality
null
null
null
null
cs.AI
null
This paper presents a duality between probability distributions and utility functions.
[ { "version": "v1", "created": "Thu, 6 Nov 2003 07:33:23 GMT" } ]
1,179,878,400,000
[ [ "Abbas", "Ali", "" ], [ "Matheson", "Jim", "" ] ]
cs/0311007
Simona Perri
Simona Perri, Nicola Leone
Parametric Connectives in Disjunctive Logic Programming
null
null
null
null
cs.AI
null
Disjunctive Logic Programming (DLP) is an advanced formalism for Knowledge Representation and Reasoning (KRR). DLP is very expressive in a precise mathematical sense: it can express every property of finite structures that is decidable in the complexity class $\Sigma^P_2$ ($\mathrm{NP}^{\mathrm{NP}}$). Importantly, DLP encodings are often simple and natural. In this paper, we single out some limitations of DLP for KRR: it cannot naturally express problems where the size of the disjunction is not known ``a priori'' (like N-Coloring) but is part of the input. To overcome these limitations, we further enhance the knowledge modelling abilities of DLP by extending the language with Parametric Connectives (OR and AND). These connectives allow us to represent compactly the disjunction/conjunction of a set of atoms having a given property. We formally define the semantics of the new language, named $\mathrm{DLP}^{\bigvee,\bigwedge}$, and we show the usefulness of the new constructs on relevant knowledge-based problems. We address implementation issues and discuss related work.
[ { "version": "v1", "created": "Fri, 7 Nov 2003 15:57:07 GMT" } ]
1,179,878,400,000
[ [ "Perri", "Simona", "" ], [ "Leone", "Nicola", "" ] ]
cs/0311024
Viviana Mascardi
Viviana Mascardi, Maurizio Martelli, Leon Sterling
Logic-Based Specification Languages for Intelligent Software Agents
67 pages, 1 table, 1 figure. Accepted for publication by the Journal "Theory and Practice of Logic Programming", volume 4, Maurice Bruynooghe Editor-in-Chief
null
null
null
cs.AI
null
The research field of Agent-Oriented Software Engineering (AOSE) aims to find abstractions, languages, methodologies and toolkits for modeling, verifying, validating and prototyping complex applications conceptualized as Multiagent Systems (MASs). A very lively research sub-field studies how formal methods can be used for AOSE. This paper presents a detailed survey of six logic-based executable agent specification languages that have been chosen for their potential to be integrated in our ARPEGGIO project, an open framework for specifying and prototyping a MAS. The six languages are ConGoLog, Agent-0, the IMPACT agent programming language, DyLog, Concurrent METATEM and Ehhf. For each executable language, the logic foundations are described and an example of use is shown. A comparison of the six languages and a survey of similar approaches complete the paper, together with considerations of the advantages of using logic-based languages in MAS modeling and prototyping.
[ { "version": "v1", "created": "Thu, 20 Nov 2003 10:10:25 GMT" } ]
1,179,878,400,000
[ [ "Mascardi", "Viviana", "" ], [ "Martelli", "Maurizio", "" ], [ "Sterling", "Leon", "" ] ]
cs/0311026
Joseph Y. Halpern
Francis C. Chu, Joseph Y. Halpern
Great Expectations. Part I: On the Customizability of Generalized Expected Utility
Preliminary version appears in Proc. 18th International Joint Conference on AI (IJCAI), 2003, pp. 291-296
null
null
null
cs.AI
null
We propose a generalization of expected utility that we call generalized EU (GEU), where a decision maker's beliefs are represented by plausibility measures, and the decision maker's tastes are represented by general (i.e.,not necessarily real-valued) utility functions. We show that every agent, ``rational'' or not, can be modeled as a GEU maximizer. We then show that we can customize GEU by selectively imposing just the constraints we want. In particular, we show how each of Savage's postulates corresponds to constraints on GEU.
[ { "version": "v1", "created": "Thu, 20 Nov 2003 17:36:53 GMT" } ]
1,179,878,400,000
[ [ "Chu", "Francis C.", "" ], [ "Halpern", "Joseph Y.", "" ] ]
cs/0311027
Joseph Y. Halpern
Francis C. Chu, Joseph Y. Halpern
Great Expectations. Part II: Generalized Expected Utility as a Universal Decision Rule
Preliminary version appears in Proc. 18th International Joint Conference on AI (IJCAI), 2003, pp. 297-302
null
null
null
cs.AI
null
Many different rules for decision making have been introduced in the literature. We show that a notion of generalized expected utility proposed in Part I of this paper is a universal decision rule, in the sense that it can represent essentially all other decision rules.
[ { "version": "v1", "created": "Thu, 20 Nov 2003 17:39:22 GMT" } ]
1,179,878,400,000
[ [ "Chu", "Francis C.", "" ], [ "Halpern", "Joseph Y.", "" ] ]
cs/0311045
J. G. Wolff
J Gerard Wolff
Unsupervised Grammar Induction in a Framework of Information Compression by Multiple Alignment, Unification and Search
null
Proceedings of the Workshop and Tutorial on Learning Context-Free Grammars (in association with the 14th European Conference on Machine Learning and the 7th European Conference on Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD 2003), September 2003, Cavtat-Dubrovnik, Croata), editors: C. de la Higuera and P. Adriaans and M. van Zaanen and J. Oncina, pp 113-124
null
null
cs.AI
null
This paper describes a novel approach to grammar induction that has been developed within a framework designed to integrate learning with other aspects of computing, AI, mathematics and logic. This framework, called "information compression by multiple alignment, unification and search" (ICMAUS), is founded on principles of Minimum Length Encoding pioneered by Solomonoff and others. Most of the paper describes SP70, a computer model of the ICMAUS framework that incorporates processes for unsupervised learning of grammars. An example is presented to show how the model can infer a plausible grammar from appropriate input. Limitations of the current model and how they may be overcome are briefly discussed.
[ { "version": "v1", "created": "Thu, 27 Nov 2003 11:18:59 GMT" } ]
1,179,878,400,000
[ [ "Wolff", "J Gerard", "" ] ]
cs/0311051
Amar Isli
Amar Isli
Integrating existing cone-shaped and projection-based cardinal direction relations and a TCSP-like decidable generalisation
I should be able to provide a longer version soon. A shorter version has been submitted to the conference KR'2004
null
null
null
cs.AI
null
We consider the integration of existing cone-shaped and projection-based calculi of cardinal direction relations, well-known in QSR. The more general, integrating language we consider is based on convex constraints of the qualitative form $r(x,y)$, $r$ being a cone-shaped or projection-based cardinal direction atomic relation, or of the quantitative form $(\alpha ,\beta)(x,y)$, with $\alpha ,\beta\in [0,2\pi)$ and $(\beta -\alpha)\in [0,\pi ]$: the meaning of the quantitative constraint, in particular, is that point $x$ belongs to the (convex) cone-shaped area rooted at $y$, and bounded by angles $\alpha$ and $\beta$. The general form of a constraint is a disjunction of the form $[r_1\vee...\vee r_{n_1}\vee (\alpha_1,\beta_1)\vee...\vee (\alpha_{n_2},\beta_{n_2})](x,y)$, with $r_i(x,y)$, $i=1... n_1$, and $(\alpha_i,\beta_i)(x,y)$, $i=1... n_2$, being convex constraints as described above: the meaning of such a general constraint is that, for some $i=1... n_1$, $r_i(x,y)$ holds, or, for some $i=1... n_2$, $(\alpha_i,\beta_i)(x,y)$ holds. A conjunction of such general constraints is a TCSP-like CSP, which we will refer to as an SCSP (Spatial Constraint Satisfaction Problem). An effective solution search algorithm for an SCSP will be described, which uses (1) constraint propagation, based on a composition operation to be defined, as the filtering method during the search, and (2) the Simplex algorithm, guaranteeing completeness, at the leaves of the search tree. The approach is particularly suited for large-scale high-level vision, such as, e.g., satellite-like surveillance of a geographic area.
[ { "version": "v1", "created": "Fri, 28 Nov 2003 04:06:56 GMT" } ]
1,179,878,400,000
[ [ "Isli", "Amar", "" ] ]
cs/0312020
Laurent Henocque
Laurent Henocque
Modeling Object Oriented Constraint Programs in Z
null
null
null
RR-LSIS-03-006
cs.AI
null
Object oriented constraint programs (OOCPs) emerge as a leading evolution of constraint programming and artificial intelligence, first applied to a range of industrial applications called configuration problems. The rich variety of technical approaches to solving configuration problems (CLP(FD), CC(FD), DCSP, terminological systems, constraint programs with set variables ...) is a source of difficulty. No universally accepted formal language exists for communicating about OOCPs, which makes the comparison of systems difficult. We present here a Z-based specification of OOCPs which avoids the pitfall of hidden object semantics. The object system is part of the specification, and captures all of the most advanced notions from the object oriented modeling standard UML. The paper illustrates these issues and the conciseness and precision of Z by the specification of a working OOCP that solves a historical AI problem: parsing a context free grammar. Being written in Z, an OOCP specification also supports formal proofs. The whole builds the foundation of an adaptive and evolving framework for communicating about constrained object models and programs.
[ { "version": "v1", "created": "Fri, 12 Dec 2003 10:15:38 GMT" } ]
1,179,878,400,000
[ [ "Henocque", "Laurent", "" ] ]
cs/0312040
Marcello Balduccini
Marcello Balduccini and Michael Gelfond
Diagnostic reasoning with A-Prolog
46 pages, 1 Postscript figure
TPLP Vol 3(4&5) (2003) 425-461
null
null
cs.AI
null
In this paper we suggest an architecture for a software agent which operates a physical device and is capable of making observations and of testing and repairing the device's components. We present simplified definitions of the notions of symptom, candidate diagnosis, and diagnosis which are based on the theory of action language ${\cal AL}$. The definitions allow one to give a simple account of the agent's behavior in which many of the agent's tasks are reduced to computing stable models of logic programs.
[ { "version": "v1", "created": "Thu, 18 Dec 2003 13:38:49 GMT" } ]
1,179,878,400,000
[ [ "Balduccini", "Marcello", "" ], [ "Gelfond", "Michael", "" ] ]
cs/0312045
Paolo Ferraris
Paolo Ferraris and Vladimir Lifschitz
Weight Constraints as Nested Expressions
To appear in Theory and Practice of Logic Programming
null
null
null
cs.AI
null
We compare two recent extensions of the answer set (stable model) semantics of logic programs. One of them, due to Lifschitz, Tang and Turner, allows the bodies and heads of rules to contain nested expressions. The other, due to Niemela and Simons, uses weight constraints. We show that there is a simple, modular translation from the language of weight constraints into the language of nested expressions that preserves the program's answer sets. Nested expressions can be eliminated from the result of this translation in favor of additional atoms. The translation makes it possible to compute answer sets for some programs with weight constraints using satisfiability solvers, and to prove the strong equivalence of programs with weight constraints using the logic of here-and-there.
[ { "version": "v1", "created": "Fri, 19 Dec 2003 16:00:43 GMT" } ]
1,179,878,400,000
[ [ "Ferraris", "Paolo", "" ], [ "Lifschitz", "Vladimir", "" ] ]
cs/0312053
Victor Marek
Victor W. Marek and Jeffrey B. Remmel
On the Expressibility of Stable Logic Programming
17 pages
TCLP 3(2003), pp. 551-567
null
null
cs.AI
null
Schlipf (1991) proved that Stable Logic Programming (SLP) solves all NP decision problems. We extend Schlipf's result to prove that SLP solves all search problems in the class NP. Moreover, we do this in a uniform way, as defined in (Marek and Truszczynski, 1999). Specifically, we show that there is a single $\mathrm{DATALOG}^{\neg}$ program $P_{\mathit{Trg}}$ such that given any Turing machine $M$, any polynomial $p$ with non-negative integer coefficients and any input $\sigma$ of size $n$ over a fixed alphabet $\Sigma$, there is an extensional database $\mathit{edb}_{M,p,\sigma}$ such that there is a one-to-one correspondence between the stable models of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ and the accepting computations of the machine $M$ that reach the final state in at most $p(n)$ steps. Moreover, $\mathit{edb}_{M,p,\sigma}$ can be computed in polynomial time from $p$, $\sigma$ and the description of $M$, and the decoding of such accepting computations from their corresponding stable models of $\mathit{edb}_{M,p,\sigma} \cup P_{\mathit{Trg}}$ can be computed in linear time. A similar statement holds for Default Logic with respect to $\Sigma^P_2$-search problems (the proof of this result involves additional technical complications and will be the subject of another publication).
[ { "version": "v1", "created": "Mon, 22 Dec 2003 18:34:21 GMT" } ]
1,179,878,400,000
[ [ "Marek", "Victor W.", "" ], [ "Remmel", "Jeffrey B.", "" ] ]
cs/0401009
J. G. Wolff
J Gerard Wolff
Unifying Computing and Cognition: The SP Theory and its Applications
null
null
null
null
cs.AI
null
This book develops the conjecture that all kinds of information processing in computers and in brains may usefully be understood as "information compression by multiple alignment, unification and search". This "SP theory", which has been under development since 1987, provides a unified view of such things as the workings of a universal Turing machine, the nature of 'knowledge', the interpretation and production of natural language, pattern recognition and best-match information retrieval, several kinds of probabilistic reasoning, planning and problem solving, unsupervised learning, and a range of concepts in mathematics and logic. The theory also provides a basis for the design of an 'SP' computer with several potential advantages compared with traditional digital computers.
[ { "version": "v1", "created": "Tue, 13 Jan 2004 16:16:07 GMT" } ]
1,179,878,400,000
[ [ "Wolff", "J Gerard", "" ] ]
cs/0402033
Fangzhen Lin
Fangzhen Lin and Jia-Huai You
Recycling Computed Answers in Rewrite Systems for Abduction
20 pages. Full version of our IJCAI-03 paper
null
null
null
cs.AI
null
In rule-based systems, goal-oriented computations correspond naturally to the possible ways that an observation may be explained. In some applications, we need to compute explanations for a series of observations with the same domain, and the question arises whether previously computed answers can be recycled. A yes answer could result in substantial savings of repeated computations. For systems based on classical logic, the answer is YES. For nonmonotonic systems, however, one tends to believe that the answer should be NO, since recycling is a form of adding information. In this paper, we show that computed answers can always be recycled, in a nontrivial way, for the class of rewrite procedures that we proposed earlier for logic programs with negation. We present some experimental results on an encoding of the logistics domain.
[ { "version": "v1", "created": "Mon, 16 Feb 2004 06:15:05 GMT" } ]
1,179,878,400,000
[ [ "Lin", "Fangzhen", "" ], [ "You", "Jia-Huai", "" ] ]
cs/0402035
Chauvet J.-M.
Jean-Marie Chauvet
Memory As A Monadic Control Construct In Problem-Solving
null
null
null
ND-2004-1
cs.AI
null
Recent advances in programming languages study and design have established a standard way of grounding computational systems representation in category theory. These formal results led to a better understanding of issues of control and side-effects in functional and imperative languages. This framework can be successfully applied to the investigation of the performance of Artificial Intelligence (AI) inference and cognitive systems. In this paper, we delineate a categorical formalisation of memory as a control structure driving performance in inference systems. Abstracting away control mechanisms from three widely used representations of memory in cognitive systems (scripts, production rules and clusters) we explain how categorical triples capture the interaction between learning and problem-solving.
[ { "version": "v1", "created": "Mon, 16 Feb 2004 17:02:29 GMT" } ]
1,254,182,400,000
[ [ "Chauvet", "Jean-Marie", "" ] ]
cs/0402057
Carlos Ches\~nevar
Sergio Alejandro Gomez and Carlos Ivan Ches\~nevar
Integrating Defeasible Argumentation and Machine Learning Techniques
5 pages
Procs. WICC 2003, pp. 787-791. Tandil, Argentina, May 2003
null
null
cs.AI
null
The field of machine learning (ML) is concerned with the question of how to construct algorithms that automatically improve with experience. In recent years many successful ML applications have been developed, such as data-mining programs, information-filtering systems, etc. Although ML algorithms allow the detection and extraction of interesting patterns of data for several kinds of problems, most of these algorithms are based on quantitative reasoning, as they rely on training data in order to infer so-called target functions. Recently, defeasible argumentation has proven to be a sound setting for formalizing common-sense qualitative reasoning. This approach can be combined with other inference techniques, such as those provided by machine learning theory. In this paper we outline different alternatives for combining defeasible argumentation and machine learning techniques. We suggest how different aspects of a generic argument-based framework can be integrated with other ML-based approaches.
[ { "version": "v1", "created": "Wed, 25 Feb 2004 18:02:29 GMT" }, { "version": "v2", "created": "Fri, 28 May 2004 17:04:36 GMT" } ]
1,179,878,400,000
[ [ "Gomez", "Sergio Alejandro", "" ], [ "Chesñevar", "Carlos Ivan", "" ] ]
cs/0403002
Umberto Straccia
Y. Loyer and U. Straccia
Epistemic Foundation of Stable Model Semantics
41 pages. To appear in Theory and Practice of Logic Programming (TPLP)
null
null
null
cs.AI
null
Stable model semantics has become a very popular approach for the management of negation in logic programming. This approach relies mainly on the closed world assumption to complete the available knowledge, and its formulation has its basis in the so-called Gelfond-Lifschitz transformation. The primary goal of this work is to present an epistemic-based characterization of stable model semantics, as an alternative to the Gelfond-Lifschitz transformation. In particular, we show that stable model semantics can be defined entirely as an extension of the Kripke-Kleene semantics. Indeed, we show that the closed world assumption can be seen as an additional source of `falsehood' to be added cumulatively to the Kripke-Kleene semantics. Our approach is purely algebraic and can abstract from the particular formalism of choice, as it is based only on monotone operators (under the knowledge order) over bilattices.
[ { "version": "v1", "created": "Tue, 2 Mar 2004 15:45:29 GMT" }, { "version": "v2", "created": "Wed, 22 Jun 2005 14:11:11 GMT" } ]
1,472,601,600,000
[ [ "Loyer", "Y.", "" ], [ "Straccia", "U.", "" ] ]
cs/0403006
Carlos Gershenson
Carlos R. de la Mora B., Carlos Gershenson, Angelica Garcia-Vega
The role of behavior modifiers in representation development
8 pages
null
null
null
cs.AI
null
We address the problem of the development of representations and their relationship to the environment. We study a software agent which develops, in the form of a network, a representation of its simple environment; this representation captures and integrates the relationships between agent and environment through a closure mechanism. The inclusion of a variable behavior modifier allows better representation development. This can be confirmed with an internal description of the closure mechanism, and with an external description of the properties of the representation network.
[ { "version": "v1", "created": "Fri, 5 Mar 2004 12:53:57 GMT" } ]
1,179,878,400,000
[ [ "B.", "Carlos R. de la Mora", "" ], [ "Gershenson", "Carlos", "" ], [ "Garcia-Vega", "Angelica", "" ] ]
cs/0404011
Francesco Calimeri
G. Ianni, F. Calimeri, A. Pietramala, M.C. Santoro
Parametric external predicates for the DLV System
10 pages
null
null
null
cs.AI
null
This document describes syntax, semantics, and implementation guidelines for enriching the DLV system with the ability to make external C function calls. This feature is realized through the introduction of parametric external predicates, whose extension is not specified through a logic program but implicitly computed through external code.
[ { "version": "v1", "created": "Mon, 5 Apr 2004 17:15:45 GMT" } ]
1,179,878,400,000
[ [ "Ianni", "G.", "" ], [ "Calimeri", "F.", "" ], [ "Pietramala", "A.", "" ], [ "Santoro", "M. C.", "" ] ]
cs/0404012
Francesco Calimeri
Francesco Calimeri, Nicola Leone
Toward the Implementation of Functions in the DLV System (Preliminary Technical Report)
7 pages
null
null
null
cs.AI
null
This document describes how functions are treated in the DLV system. We first give the language, then specify the main implementation issues.
[ { "version": "v1", "created": "Mon, 5 Apr 2004 17:23:07 GMT" } ]
1,179,878,400,000
[ [ "Calimeri", "Francesco", "" ], [ "Leone", "Nicola", "" ] ]
cs/0404051
Jiang Qiu
Jorge Lobo, Gisela Mendez, Stuart R. Taylor
Knowledge and the Action Description Language A
Appeared in Theory and Practice of Logic Programming, vol. 1, no. 2, 2001
Theory and Practice of Logic Programming, vol. 1, no. 2, 2001
null
null
cs.AI
null
We introduce Ak, an extension of the action description language A (Gelfond and Lifschitz, 1993) to handle actions which affect knowledge. We use sensing actions to increase an agent's knowledge of the world and non-deterministic actions to remove knowledge. We include complex plans involving conditionals and loops in our query language for hypothetical reasoning. We also present a translation of Ak domain descriptions into epistemic logic programs.
[ { "version": "v1", "created": "Sat, 24 Apr 2004 14:16:04 GMT" } ]
1,179,878,400,000
[ [ "Lobo", "Jorge", "" ], [ "Mendez", "Gisela", "" ], [ "Taylor", "Stuart R.", "" ] ]
cs/0405008
Ajith Abraham
Ravi Jain and Ajith Abraham
A Comparative Study of Fuzzy Classification Methods on Breast Cancer Data
null
Australasian Physical and Engineering Sciences in Medicine, Australia, 2004 (forthcoming)
null
null
cs.AI
null
In this paper, we examine the performance of four fuzzy rule generation methods on Wisconsin breast cancer data. The first method generates fuzzy if-then rules using the mean and the standard deviation of attribute values. The second approach generates fuzzy if-then rules using the histogram of attribute values. The third procedure generates fuzzy if-then rules with certainty grades, partitioning each attribute into homogeneous fuzzy sets. In the fourth approach, only overlapping areas are partitioned. The first two approaches generate a single fuzzy if-then rule for each class by specifying the membership function of each antecedent fuzzy set using the information about attribute values of training patterns. The other two approaches are based on fuzzy grids with homogeneous fuzzy partitions of each attribute. The performance of each approach is evaluated on breast cancer data sets. Simulation results show that the modified grid approach achieves a high classification rate of 99.73%.
[ { "version": "v1", "created": "Tue, 4 May 2004 23:02:53 GMT" } ]
1,179,878,400,000
[ [ "Jain", "Ravi", "" ], [ "Abraham", "Ajith", "" ] ]