Dataset schema (column, type, observed range); each record below lists its fields in this order:

  id              string            length 9 to 10
  submitter       string            length 5 to 47
  authors         string            length 5 to 1.72k
  title           string            length 11 to 234
  comments        string            length 1 to 491
  journal-ref     string            length 4 to 396
  doi             string            length 13 to 97
  report-no       string            length 4 to 138
  categories      string (classes)  1 distinct value
  license         string (classes)  9 distinct values
  abstract        string            length 29 to 3.66k
  versions        list              length 1 to 21
  update_date     int64             1,180B to 1,718B (epoch milliseconds)
  authors_parsed  sequence          length 1 to 98
1109.5714
N. Samaras
N. Samaras, K. Stergiou
Binary Encodings of Non-binary Constraint Satisfaction Problems: Algorithms and Experimental Results
null
Journal Of Artificial Intelligence Research, Volume 24, pages 641-684, 2005
10.1613/jair.1776
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A non-binary Constraint Satisfaction Problem (CSP) can be solved directly using extended versions of binary techniques. Alternatively, the non-binary problem can be translated into an equivalent binary one. In this case, it is generally accepted that the translated problem can be solved by applying well-established techniques for binary CSPs. In this paper we evaluate the applicability of the latter approach. We demonstrate that the use of standard techniques for binary CSPs in the encodings of non-binary problems is problematic and results in models that are very rarely competitive with the non-binary representation. To overcome this, we propose specialized arc consistency and search algorithms for binary encodings, and we evaluate them theoretically and empirically. We consider three binary representations: the hidden variable encoding, the dual encoding, and the double encoding. Theoretical and empirical results show that, for certain classes of non-binary constraints, binary encodings are a competitive option, and in many cases, a better one than the non-binary representation.
[ { "version": "v1", "created": "Mon, 26 Sep 2011 20:23:01 GMT" } ]
1,317,168,000,000
[ [ "Samaras", "N.", "" ], [ "Stergiou", "K.", "" ] ]
1109.5716
P. Adjiman
P. Adjiman, P. Chatalic, F. Goasdoue, M. C. Rousset, L. Simon
Distributed Reasoning in a Peer-to-Peer Setting: Application to the Semantic Web
null
Journal Of Artificial Intelligence Research, Volume 25, pages 269-314, 2006
10.1613/jair.1785
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a peer-to-peer inference system, each peer can reason locally but can also solicit some of its acquaintances, which are peers sharing part of its vocabulary. In this paper, we consider peer-to-peer inference systems in which the local theory of each peer is a set of propositional clauses defined upon a local vocabulary. An important characteristic of peer-to-peer inference systems is that the global theory (the union of all peer theories) is not known (as opposed to partition-based reasoning systems). The main contribution of this paper is to provide the first consequence finding algorithm in a peer-to-peer setting: DeCA. It is anytime and computes consequences gradually from the solicited peer to peers that are more and more distant. We exhibit a sufficient condition on the acquaintance graph of the peer-to-peer inference system for guaranteeing the completeness of this algorithm. Another important contribution is to apply this general distributed reasoning setting to the setting of the Semantic Web through the Somewhere semantic peer-to-peer data management system. The last contribution of this paper is to provide an experimental analysis of the scalability of the peer-to-peer infrastructure that we propose, on large networks of 1000 peers.
[ { "version": "v1", "created": "Mon, 26 Sep 2011 20:23:24 GMT" } ]
1,317,168,000,000
[ [ "Adjiman", "P.", "" ], [ "Chatalic", "P.", "" ], [ "Goasdoue", "F.", "" ], [ "Rousset", "M. C.", "" ], [ "Simon", "L.", "" ] ]
1109.5717
H. H. Hoos
H. H. Hoos, W. Pullan
Dynamic Local Search for the Maximum Clique Problem
null
Journal Of Artificial Intelligence Research, Volume 25, pages 159-185, 2006
10.1613/jair.1815
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we introduce DLS-MC, a new stochastic local search algorithm for the maximum clique problem. DLS-MC alternates between phases of iterative improvement, during which suitable vertices are added to the current clique, and plateau search, during which vertices of the current clique are swapped with vertices not contained in the current clique. The selection of vertices is solely based on vertex penalties that are dynamically adjusted during the search, and a perturbation mechanism is used to overcome search stagnation. The behaviour of DLS-MC is controlled by a single parameter, penalty delay, which controls the frequency at which vertex penalties are reduced. We show empirically that DLS-MC achieves substantial performance improvements over state-of-the-art algorithms for the maximum clique problem over a large range of the commonly used DIMACS benchmark instances.
[ { "version": "v1", "created": "Mon, 26 Sep 2011 20:24:56 GMT" } ]
1,317,168,000,000
[ [ "Hoos", "H. H.", "" ], [ "Pullan", "W.", "" ] ]
1109.5732
G. Gutnik
G. Gutnik, G. A. Kaminka
Representing Conversations for Scalable Overhearing
null
Journal Of Artificial Intelligence Research, Volume 25, pages 349-387, 2006
10.1613/jair.1829
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open distributed multi-agent systems are gaining interest in the academic community and in industry. In such open settings, agents are often coordinated using standardized agent conversation protocols. The representation of such protocols (for analysis, validation, monitoring, etc.) is an important aspect of multi-agent applications. Recently, Petri nets have been shown to be an interesting approach to such representation, and radically different approaches using Petri nets have been proposed. However, their relative strengths and weaknesses have not been examined. Moreover, their scalability and suitability for different tasks have not been addressed. This paper addresses both these challenges. First, we analyze existing Petri net representations in terms of their scalability and appropriateness for overhearing, an important task in monitoring open multi-agent systems. Then, building on the insights gained, we introduce a novel representation using Colored Petri nets that explicitly represent legal joint conversation states and messages. This representation approach offers significant improvements in scalability and is particularly suitable for overhearing. Furthermore, we show that this new representation offers a comprehensive coverage of all conversation features of FIPA conversation standards. We also present a procedure for transforming AUML conversation protocol diagrams (a standard human-readable representation) to our Colored Petri net representation.
[ { "version": "v1", "created": "Mon, 26 Sep 2011 21:56:16 GMT" } ]
1,317,168,000,000
[ [ "Gutnik", "G.", "" ], [ "Kaminka", "G. A.", "" ] ]
1109.5750
P. Haslum
P. Haslum
Improving Heuristics Through Relaxed Search - An Analysis of TP4 and HSP*a in the 2004 Planning Competition
null
Journal Of Artificial Intelligence Research, Volume 25, pages 233-267, 2006
10.1613/jair.1885
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The h^m admissible heuristics for (sequential and temporal) regression planning are defined by a parameterized relaxation of the optimal cost function in the regression search space, where the parameter m offers a trade-off between the accuracy and computational cost of the heuristic. Existing methods for computing the h^m heuristic require time exponential in m, limiting them to small values (m <= 2). The h^m heuristic can also be viewed as the optimal cost function in a relaxation of the search space: this paper presents relaxed search, a method for computing this function partially by searching in the relaxed space. The relaxed search method, because it computes h^m only partially, is computationally cheaper and therefore usable for higher values of m. The (complete) h^m heuristic is combined with partial h^m heuristics, for m = 3,..., computed by relaxed search, resulting in a more accurate heuristic. This use of the relaxed search method to improve on the h^m heuristic is evaluated by comparing two optimal temporal planners: TP4, which does not use it, and HSP*a, which uses it but is otherwise identical to TP4. The comparison is made on the domains used in the 2004 International Planning Competition, in which both planners participated. Relaxed search is found to be cost effective in some of these domains, but not all. Analysis reveals a characterization of the domains in which relaxed search can be expected to be cost effective, in terms of two measures on the original and relaxed search spaces. In the domains where relaxed search is cost effective, expanding small states is computationally cheaper than expanding large states and small states tend to have small successor states.
[ { "version": "v1", "created": "Tue, 27 Sep 2011 00:19:20 GMT" } ]
1,317,168,000,000
[ [ "Haslum", "P.", "" ] ]
1109.5920
Emmanuel Hebrard
Diarmuid Grimes (4C UCC), Emmanuel Hebrard (LAAS)
Models and Strategies for Variants of the Job Shop Scheduling Problem
Principles and Practice of Constraint Programming - CP 2011, Perugia : Italy (2011)
null
10.1007/978-3-642-23786-7
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, a variety of constraint programming and Boolean satisfiability approaches to scheduling problems have been introduced. They have in common the use of relatively simple propagation mechanisms and an adaptive way to focus on the most constrained part of the problem. In some cases, these methods compare favorably to more classical constraint programming methods relying on propagation algorithms for global unary or cumulative resource constraints and dedicated search heuristics. In particular, we described an approach that combines restarting with a generic adaptive heuristic and solution-guided branching on a simple model based on a decomposition of disjunctive constraints. In this paper, we introduce an adaptation of this technique for an important subclass of job shop scheduling problems (JSPs), where the objective function involves minimization of earliness/tardiness costs. We further show that our technique can be improved by adding domain specific information for one variant of the JSP (involving time lag constraints). In particular we introduce a dedicated greedy heuristic, and an improved model for the case where the maximal time lag is 0 (also referred to as no-wait JSPs).
[ { "version": "v1", "created": "Tue, 27 Sep 2011 14:53:01 GMT" } ]
1,317,168,000,000
[ [ "Grimes", "Diarmuid", "", "4C UCC" ], [ "Hebrard", "Emmanuel", "", "LAAS" ] ]
1109.5951
Shane Legg Dr
Shane Legg, Joel Veness
An Approximation of the Universal Intelligence Measure
14 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Universal Intelligence Measure is a recently proposed formal definition of intelligence. It is mathematically specified, extremely general, and captures the essence of many informal definitions of intelligence. It is based on Hutter's Universal Artificial Intelligence theory, an extension of Ray Solomonoff's pioneering work on universal induction. Since the Universal Intelligence Measure is only asymptotically computable, building a practical intelligence test from it is not straightforward. This paper studies the practical issues involved in developing a real-world UIM-based performance metric. Based on our investigation, we develop a prototype implementation which we use to evaluate a number of different artificial agents.
[ { "version": "v1", "created": "Tue, 27 Sep 2011 16:09:27 GMT" }, { "version": "v2", "created": "Thu, 29 Sep 2011 21:38:56 GMT" } ]
1,317,600,000,000
[ [ "Legg", "Shane", "" ], [ "Veness", "Joel", "" ] ]
1109.6029
S. Schroedl
S. Schroedl
An Improved Search Algorithm for Optimal Multiple-Sequence Alignment
null
Journal Of Artificial Intelligence Research, Volume 23, pages 587-623, 2005
10.1613/jair.1534
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple sequence alignment (MSA) is a ubiquitous problem in computational biology. Although it is NP-hard to find an optimal solution for an arbitrary number of sequences, due to the importance of this problem researchers are trying to push the limits of exact algorithms further. Since MSA can be cast as a classical path finding problem, it is attracting a growing number of AI researchers interested in heuristic search algorithms as a challenge with actual practical relevance. In this paper, we first review two previous, complementary lines of research. Based on Hirschberg's algorithm, Dynamic Programming needs O(kN^(k-1)) space to store both the search frontier and the nodes needed to reconstruct the solution path, for k sequences of length N. Best-first search, on the other hand, has the advantage of bounding the search space that has to be explored using a heuristic. However, it is necessary to maintain all explored nodes up to the final solution in order to prevent the search from re-expanding them at higher cost. Earlier approaches to reduce the Closed list are either incompatible with pruning methods for the Open list, or must retain at least the boundary of the Closed list. In this article, we present an algorithm that attempts to combine the respective advantages; like A*, it uses a heuristic for pruning the search space, but reduces both the maximum Open and Closed size to O(kN^(k-1)), as in Dynamic Programming. The underlying idea is to conduct a series of searches with successively increasing upper bounds, but using the DP ordering as the key for the Open priority queue. With a suitable choice of thresholds, in practice, a running time below four times that of A* can be expected. In our experiments we show that our algorithm outperforms one of the currently most successful algorithms for optimal multiple sequence alignments, Partial Expansion A*, both in time and memory. Moreover, we apply a refined heuristic based on optimal alignments not only of pairs of sequences, but of larger subsets. This idea is not new; however, to make it practically relevant we show that it is equally important to bound the heuristic computation appropriately, or the overhead can obliterate any possible gain. Furthermore, we discuss a number of improvements in time and space efficiency with regard to practical implementations. Our algorithm, used in conjunction with higher-dimensional heuristics, is able to calculate for the first time the optimal alignment for almost all of the problems in Reference 1 of the benchmark database BAliBASE.
[ { "version": "v1", "created": "Tue, 27 Sep 2011 20:40:06 GMT" } ]
1,317,254,400,000
[ [ "Schroedl", "S.", "" ] ]
1109.6030
M. Beetz
M. Beetz, H. Grosskreutz
Probabilistic Hybrid Action Models for Predicting Concurrent Percept-driven Robot Behavior
null
Journal Of Artificial Intelligence Research, Volume 24, pages 799-849, 2005
10.1613/jair.1565
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article develops Probabilistic Hybrid Action Models (PHAMs), a realistic causal model for predicting the behavior generated by modern percept-driven robot plans. PHAMs represent aspects of robot behavior that cannot be represented by most action models used in AI planning: the temporal structure of continuous control processes, their non-deterministic effects, several modes of their interferences, and the achievement of triggering conditions in closed-loop robot plans. The main contributions of this article are: (1) PHAMs, a model of concurrent percept-driven behavior, its formalization, and proofs that the model generates probably qualitatively accurate predictions; and (2) a resource-efficient inference method for PHAMs based on sampling projections from probabilistic action models and state descriptions. We show how PHAMs can be applied to planning the course of action of an autonomous robot office courier based on analytical and experimental results.
[ { "version": "v1", "created": "Tue, 27 Sep 2011 20:41:47 GMT" } ]
1,317,254,400,000
[ [ "Beetz", "M.", "" ], [ "Grosskreutz", "H.", "" ] ]
1109.6033
G. DeJong
G. DeJong, A. Epshteyn
Generative Prior Knowledge for Discriminative Classification
null
Journal Of Artificial Intelligence Research, Volume 27, pages 25-53, 2006
10.1613/jair.1934
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel framework for integrating prior knowledge into discriminative classifiers. Our framework allows discriminative classifiers such as Support Vector Machines (SVMs) to utilize prior knowledge specified in the generative setting. The dual objective of fitting the data and respecting prior knowledge is formulated as a bilevel program, which is solved (approximately) via iterative application of second-order cone programming. To test our approach, we consider the problem of using WordNet (a semantic database of the English language) to improve low-sample classification accuracy of newsgroup categorization. WordNet is viewed as an approximate, but readily available source of background knowledge, and our framework is capable of utilizing it in a flexible way.
[ { "version": "v1", "created": "Tue, 27 Sep 2011 20:53:29 GMT" } ]
1,317,254,400,000
[ [ "DeJong", "G.", "" ], [ "Epshteyn", "A.", "" ] ]
1109.6051
M. Helmert
M. Helmert
The Fast Downward Planning System
null
Journal Of Artificial Intelligence Research, Volume 26, pages 191-246, 2006
10.1613/jair.1705
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fast Downward is a classical planning system based on heuristic search. It can deal with general deterministic planning problems encoded in the propositional fragment of PDDL2.2, including advanced features like ADL conditions and effects and derived predicates (axioms). Like other well-known planners such as HSP and FF, Fast Downward is a progression planner, searching the space of world states of a planning task in the forward direction. However, unlike other PDDL planning systems, Fast Downward does not use the propositional PDDL representation of a planning task directly. Instead, the input is first translated into an alternative representation called multi-valued planning tasks, which makes many of the implicit constraints of a propositional planning task explicit. Exploiting this alternative representation, Fast Downward uses hierarchical decompositions of planning tasks for computing its heuristic function, called the causal graph heuristic, which is very different from traditional HSP-like heuristics based on ignoring negative interactions of operators. In this article, we give a full account of Fast Downward's approach to solving multi-valued planning tasks. We extend our earlier discussion of the causal graph heuristic to tasks involving axioms and conditional effects and present some novel techniques for search control that are used within Fast Downward's best-first search algorithm: preferred operators transfer the idea of helpful actions from local search to global best-first search, deferred evaluation of heuristic functions mitigates the negative effect of large branching factors on search performance, and multi-heuristic best-first search combines several heuristic evaluation functions within a single search algorithm in an orthogonal way. We also describe efficient data structures for fast state expansion (successor generators and axiom evaluators) and present a new non-heuristic search algorithm called focused iterative-broadening search, which utilizes the information encoded in causal graphs in a novel way. Fast Downward has proven remarkably successful: It won the "classical" (i.e., propositional, non-optimising) track of the 4th International Planning Competition at ICAPS 2004, following in the footsteps of planners such as FF and LPG. Our experiments show that it also performs very well on the benchmarks of the earlier planning competitions and provide some insights about the usefulness of the new search enhancements.
[ { "version": "v1", "created": "Tue, 27 Sep 2011 22:04:43 GMT" } ]
1,317,254,400,000
[ [ "Helmert", "M.", "" ] ]
1109.6052
V. R. Lesser
V. R. Lesser, R. Mailler
Asynchronous Partial Overlay: A New Algorithm for Solving Distributed Constraint Satisfaction Problems
null
Journal Of Artificial Intelligence Research, Volume 25, pages 529-576, 2006
10.1613/jair.1786
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed Constraint Satisfaction (DCSP) has long been considered an important problem in multi-agent systems research. This is because many real-world problems can be represented as constraint satisfaction and these problems often present themselves in a distributed form. In this article, we present a new complete, distributed algorithm called Asynchronous Partial Overlay (APO) for solving DCSPs that is based on a cooperative mediation process. The primary ideas behind this algorithm are that agents, when acting as a mediator, centralize small, relevant portions of the DCSP, that these centralized subproblems overlap, and that agents increase the size of their subproblems along critical paths within the DCSP as the problem solving unfolds. We present empirical evidence that shows that APO outperforms other known, complete DCSP techniques.
[ { "version": "v1", "created": "Tue, 27 Sep 2011 22:05:46 GMT" } ]
1,317,254,400,000
[ [ "Lesser", "V. R.", "" ], [ "Mailler", "R.", "" ] ]
1109.6344
R. Booth
R. Booth, T. Meyer
Admissible and Restrained Revision
null
Journal Of Artificial Intelligence Research, Volume 26, pages 127-151, 2006
10.1613/jair.1874
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As partial justification of their framework for iterated belief revision, Darwiche and Pearl convincingly argued against Boutilier's natural revision and provided a prototypical revision operator that fits into their scheme. We show that the Darwiche-Pearl arguments lead naturally to the acceptance of a smaller class of operators which we refer to as admissible. Admissible revision ensures that the penultimate input is not ignored completely, thereby eliminating natural revision, but includes the Darwiche-Pearl operator, Nayak's lexicographic revision operator, and a newly introduced operator called restrained revision. We demonstrate that restrained revision is the most conservative of admissible revision operators, effecting as few changes as possible, while lexicographic revision is the least conservative, and point out that restrained revision can also be viewed as a composite operator, consisting of natural revision preceded by an application of a "backwards revision" operator previously studied by Papini. Finally, we propose the establishment of a principled approach for choosing an appropriate revision operator in different contexts and discuss future work.
[ { "version": "v1", "created": "Wed, 28 Sep 2011 20:26:49 GMT" } ]
1,317,340,800,000
[ [ "Booth", "R.", "" ], [ "Meyer", "T.", "" ] ]
1109.6345
R. I. Brafman
R. I. Brafman, C. Domshlak, S. E. Shimony
On Graphical Modeling of Preference and Importance
null
Journal Of Artificial Intelligence Research, Volume 25, pages 389-424, 2006
10.1613/jair.1895
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, CP-nets have emerged as a useful tool for supporting preference elicitation, reasoning, and representation. CP-nets capture and support reasoning with qualitative conditional preference statements, statements that are relatively natural for users to express. In this paper, we extend the CP-nets formalism to handle another class of very natural qualitative statements one often uses in expressing preferences in daily life - statements of relative importance of attributes. The resulting formalism, TCP-nets, maintains the spirit of CP-nets, in that it remains focused on using only simple and natural preference statements, uses the ceteris paribus semantics, and utilizes a graphical representation of this information to reason about its consistency and to perform, possibly constrained, optimization using it. The extra expressiveness it provides allows us to better model tradeoffs users would like to make, more faithfully representing their preferences.
[ { "version": "v1", "created": "Wed, 28 Sep 2011 20:29:41 GMT" } ]
1,317,340,800,000
[ [ "Brafman", "R. I.", "" ], [ "Domshlak", "C.", "" ], [ "Shimony", "S. E.", "" ] ]
1109.6346
M. Pistore
M. Pistore, M. Y. Vardi
The Planning Spectrum - One, Two, Three, Infinity
null
Journal Of Artificial Intelligence Research, Volume 30, pages 101-132, 2007
10.1613/jair.1909
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear Temporal Logic (LTL) is widely used for defining conditions on the execution paths of dynamic systems. In the case of dynamic systems that allow for nondeterministic evolutions, one has to specify, along with an LTL formula f, which are the paths that are required to satisfy the formula. Two extreme cases are the universal interpretation A.f, which requires that the formula be satisfied for all execution paths, and the existential interpretation E.f, which requires that the formula be satisfied for some execution path. When LTL is applied to the definition of goals in planning problems on nondeterministic domains, these two extreme cases are too restrictive. It is often impossible to develop plans that achieve the goal in all the nondeterministic evolutions of a system, and it is too weak to require that the goal is satisfied by some execution. In this paper we explore alternative interpretations of an LTL formula that are between these extreme cases. We define a new language that permits an arbitrary combination of the A and E quantifiers, thus allowing us, for instance, to require that each finite execution can be extended to an execution satisfying an LTL formula (AE.f), or that there is some finite execution whose extensions all satisfy an LTL formula (EA.f). We show that only eight of these combinations of path quantifiers are relevant, corresponding to an alternation of the quantifiers of length one (A and E), two (AE and EA), three (AEA and EAE), and infinity ((AE)* and (EA)*). We also present a planning algorithm for the new language that is based on an automata-theoretic approach, and study its complexity.
[ { "version": "v1", "created": "Wed, 28 Sep 2011 20:35:31 GMT" } ]
1,317,340,800,000
[ [ "Pistore", "M.", "" ], [ "Vardi", "M. Y.", "" ] ]
1109.6348
A. Roy
A. Roy
Fault Tolerant Boolean Satisfiability
null
Journal Of Artificial Intelligence Research, Volume 25, pages 503-527, 2006
10.1613/jair.1914
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A delta-model is a satisfying assignment of a Boolean formula for which any small alteration, such as a single bit flip, can be repaired by flips to some small number of other bits, yielding a new satisfying assignment. These satisfying assignments represent robust solutions to optimization problems (e.g., scheduling) where it is possible to recover from unforeseen events (e.g., a resource becoming unavailable). The concept of delta-models was introduced by Ginsberg, Parkes and Roy (AAAI 1998), where it was proved that finding delta-models for general Boolean formulas is NP-complete. In this paper, we extend that result by studying the complexity of finding delta-models for classes of Boolean formulas which are known to have polynomial time satisfiability solvers. In particular, we examine 2-SAT, Horn-SAT, Affine-SAT, dual-Horn-SAT, 0-valid and 1-valid SAT. We see a wide variation in the complexity of finding delta-models, e.g., while 2-SAT and Affine-SAT have polynomial time tests for delta-models, testing whether a Horn-SAT formula has one is NP-complete.
[ { "version": "v1", "created": "Wed, 28 Sep 2011 20:37:53 GMT" } ]
1,317,340,800,000
[ [ "Roy", "A.", "" ] ]
1109.6361
J. Y. Chai
J. Y. Chai, Z. Prasov, S. Qu
Cognitive Principles in Robust Multimodal Interpretation
null
Journal Of Artificial Intelligence Research, Volume 27, pages 55-83, 2006
10.1613/jair.1936
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal conversational interfaces provide a natural means for users to communicate with computer systems through multiple modalities such as speech and gesture. To build effective multimodal interfaces, automated interpretation of user multimodal inputs is important. Inspired by the previous investigation on cognitive status in multimodal human machine interaction, we have developed a greedy algorithm for interpreting user referring expressions (i.e., multimodal reference resolution). This algorithm incorporates the cognitive principles of Conversational Implicature and Givenness Hierarchy and applies constraints from various sources (e.g., temporal, semantic, and contextual) to resolve references. Our empirical results have shown the advantage of this algorithm in efficiently resolving a variety of user references. Because of its simplicity and generality, this approach has the potential to improve the robustness of multimodal input interpretation.
[ { "version": "v1", "created": "Wed, 28 Sep 2011 21:45:34 GMT" } ]
1,317,340,800,000
[ [ "Chai", "J. Y.", "" ], [ "Prasov", "Z.", "" ], [ "Qu", "S.", "" ] ]
1109.6618
D. Davidov
D. Davidov, S. Markovitch
Multiple-Goal Heuristic Search
null
Journal Of Artificial Intelligence Research, Volume 26, pages 417-451, 2006
10.1613/jair.1940
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new framework for anytime heuristic search where the task is to achieve as many goals as possible within the allocated resources. We show the inadequacy of traditional distance-estimation heuristics for tasks of this type and present alternative heuristics that are more appropriate for multiple-goal search. In particular, we introduce the marginal-utility heuristic, which estimates the cost and the benefit of exploring a subtree below a search node. We developed two methods for online learning of the marginal-utility heuristic. One is based on local similarity of the partial marginal utility of sibling nodes, and the other generalizes marginal-utility over the state feature space. We apply our adaptive and non-adaptive multiple-goal search algorithms to several problems, including focused crawling, and show their superiority over existing methods.
[ { "version": "v1", "created": "Thu, 29 Sep 2011 18:50:13 GMT" } ]
1,426,723,200,000
[ [ "Davidov", "D.", "" ], [ "Markovitch", "S.", "" ] ]
1109.6621
S. Hoelldobler
S. Hoelldobler, E. Karabaev, O. Skvortsova
FluCaP: A Heuristic Search Planner for First-Order MDPs
null
Journal Of Artificial Intelligence Research, Volume 27, pages 419-439, 2006
10.1613/jair.1965
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a heuristic search algorithm for solving first-order Markov Decision Processes (FOMDPs). Our approach combines first-order state abstraction that avoids evaluating states individually, and heuristic search that avoids evaluating all states. Firstly, in contrast to existing systems, which start with propositionalizing the FOMDP and then perform state abstraction on its propositionalized version, we apply state abstraction directly on the FOMDP, avoiding propositionalization. This kind of abstraction is referred to as first-order state abstraction. Secondly, guided by an admissible heuristic, the search is restricted to those states that are reachable from the initial state. We demonstrate the usefulness of the above techniques for solving FOMDPs with a system, referred to as FluCaP (formerly, FCPlanner), that entered the probabilistic track of the 2004 International Planning Competition (IPC2004) and demonstrated an advantage over other planners on the problems represented in first-order terms.
[ { "version": "v1", "created": "Thu, 29 Sep 2011 18:58:54 GMT" } ]
1,317,340,800,000
[ [ "Hoelldobler", "S.", "" ], [ "Karabaev", "E.", "" ], [ "Skvortsova", "O.", "" ] ]
1109.6841
Percy Liang
Percy Liang and Michael I. Jordan and Dan Klein
Learning Dependency-Based Compositional Semantics
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Suppose we want to build a system that answers a natural language question by representing its semantics as a logical form and computing the answer given a structured database of facts. The core part of such a system is the semantic parser that maps questions to logical forms. Semantic parsers are typically trained from examples of questions annotated with their target logical forms, but this type of annotation is expensive. Our goal is to learn a semantic parser from question-answer pairs instead, where the logical form is modeled as a latent variable. Motivated by this challenging learning problem, we develop a new semantic formalism, dependency-based compositional semantics (DCS), which has favorable linguistic, statistical, and computational properties. We define a log-linear distribution over DCS logical forms and estimate the parameters using a simple procedure that alternates between beam search and numerical optimization. On two standard semantic parsing benchmarks, our system outperforms all existing state-of-the-art systems, despite using no annotated logical forms.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 14:49:30 GMT" } ]
1,317,600,000,000
[ [ "Liang", "Percy", "" ], [ "Jordan", "Michael I.", "" ], [ "Klein", "Dan", "" ] ]
1110.0020
A. C. Cem Say
\"O. Y{\i}lmaz, A. C. C. Say
Causes of Ineradicable Spurious Predictions in Qualitative Simulation
null
Journal Of Artificial Intelligence Research, Volume 27, pages 551-575, 2006
10.1613/jair.2065
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It was recently proved that a sound and complete qualitative simulator does not exist, that is, as long as the input-output vocabulary of the state-of-the-art QSIM algorithm is used, there will always be input models which cause any simulator with a coverage guarantee to make spurious predictions in its output. In this paper, we examine whether a meaningfully expressive restriction of this vocabulary is possible so that one can build a simulator with both the soundness and completeness properties. We prove several negative results: All sound qualitative simulators, employing subsets of the QSIM representation which retain the operating region transition feature, and support at least the addition and constancy constraints, are shown to be inherently incomplete. Even when the simulations are restricted to run in a single operating region, a constraint vocabulary containing just the addition, constancy, derivative, and multiplication relations makes the construction of sound and complete qualitative simulators impossible.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 20:44:00 GMT" } ]
1,321,833,600,000
[ [ "Yılmaz", "Ö.", "" ], [ "Say", "A. C. C.", "" ] ]
1110.0023
L. Liu
L. Liu, M. Truszczynski
Properties and Applications of Programs with Monotone and Convex Constraints
null
Journal Of Artificial Intelligence Research, Volume 27, pages 299-334, 2006
10.1613/jair.2009
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study properties of programs with monotone and convex constraints. We extend to these formalisms concepts and results from normal logic programming. They include the notions of strong and uniform equivalence with their characterizations, tight programs and Fages' Lemma, program completion and loop formulas. Our results provide an abstract account of properties of some recent extensions of logic programming with aggregates, especially the formalism of lparse programs. They imply a method to compute stable models of lparse programs by means of off-the-shelf solvers of pseudo-Boolean constraints, which is often much faster than the smodels system.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 20:51:03 GMT" } ]
1,317,686,400,000
[ [ "Liu", "L.", "" ], [ "Truszczynski", "M.", "" ] ]
1110.0024
S. F. Smith
S. F. Smith, M. J. Streeter
How the Landscape of Random Job Shop Scheduling Instances Depends on the Ratio of Jobs to Machines
null
Journal Of Artificial Intelligence Research, Volume 26, pages 247-287, 2006
10.1613/jair.2013
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We characterize the search landscape of random instances of the job shop scheduling problem (JSP). Specifically, we investigate how the expected values of (1) backbone size, (2) distance between near-optimal schedules, and (3) makespan of random schedules vary as a function of the job to machine ratio (N/M). For the limiting cases N/M approaches 0 and N/M approaches infinity we provide analytical results, while for intermediate values of N/M we perform experiments. We prove that as N/M approaches 0, backbone size approaches 100%, while as N/M approaches infinity the backbone vanishes. In the process we show that as N/M approaches 0 (resp. N/M approaches infinity), simple priority rules almost surely generate an optimal schedule, providing theoretical evidence of an "easy-hard-easy" pattern of typical-case instance difficulty in job shop scheduling. We also draw connections between our theoretical results and the "big valley" picture of JSP landscapes.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 20:51:28 GMT" } ]
1,317,686,400,000
[ [ "Smith", "S. F.", "" ], [ "Streeter", "M. J.", "" ] ]
1110.0026
B. Faltings
B. Faltings, P. Pu, P. Viappiani
Preference-based Search using Example-Critiquing with Suggestions
null
Journal Of Artificial Intelligence Research, Volume 27, pages 465-503, 2006
10.1613/jair.2075
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider interactive tools that help users search for their most preferred item in a large collection of options. In particular, we examine example-critiquing, a technique for enabling users to incrementally construct preference models by critiquing example options that are presented to them. We present novel techniques for improving the example-critiquing technology by adding suggestions to its displayed options. Such suggestions are calculated based on an analysis of users' current preference models and their potential hidden preferences. We evaluate the performance of our model-based suggestion techniques with both synthetic and real users. Results show that such suggestions are highly attractive to users and can stimulate them to express more preferences to improve the chance of identifying their most preferred item by up to 78%.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 20:55:50 GMT" } ]
1,317,686,400,000
[ [ "Faltings", "B.", "" ], [ "Pu", "P.", "" ], [ "Viappiani", "P.", "" ] ]
1110.0027
Daniel Bryce
J. Pineau, G. Gordon, S. Thrun
Anytime Point-Based Approximations for Large POMDPs
null
Journal Of Artificial Intelligence Research, Volume 27, pages 335-380, 2006
10.1613/jair.2078
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Partially Observable Markov Decision Process has long been recognized as a rich framework for real-world planning and control problems, especially in robotics. However, exact solutions in this framework are typically computationally intractable for all but the smallest problems. A well-known technique for speeding up POMDP solving involves performing value backups at specific belief points, rather than over the entire belief simplex. The efficiency of this approach, however, depends greatly on the selection of points. This paper presents a set of novel techniques for selecting informative belief points which work well in practice. The point selection procedure is combined with point-based value backups to form an effective anytime POMDP algorithm called Point-Based Value Iteration (PBVI). The first aim of this paper is to introduce this algorithm and present a theoretical analysis justifying the choice of belief selection technique. The second aim of this paper is to provide a thorough empirical comparison between PBVI and other state-of-the-art POMDP methods, in particular the Perseus algorithm, in an effort to highlight their similarities and differences. Evaluation is performed using both standard POMDP domains and realistic robotic tasks.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 20:56:49 GMT" }, { "version": "v2", "created": "Tue, 4 Oct 2011 15:08:13 GMT" } ]
1,317,772,800,000
[ [ "Pineau", "J.", "" ], [ "Gordon", "G.", "" ], [ "Thrun", "S.", "" ] ]
1110.0028
C. Guestrin
C. Guestrin, M. Hauskrecht, B. Kveton
Solving Factored MDPs with Hybrid State and Action Variables
null
Journal Of Artificial Intelligence Research, Volume 27, pages 153-201, 2006
10.1613/jair.2085
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Efficient representations and solutions for large decision problems with continuous and discrete variables are among the most important challenges faced by the designers of automated decision support systems. In this paper, we describe a novel hybrid factored Markov decision process (MDP) model that allows for a compact representation of these problems, and a new hybrid approximate linear programming (HALP) framework that permits their efficient solutions. The central idea of HALP is to approximate the optimal value function by a linear combination of basis functions and optimize its weights by linear programming. We analyze both theoretical and computational aspects of this approach, and demonstrate its scale-up potential on several hybrid optimization problems.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 20:57:35 GMT" } ]
1,317,686,400,000
[ [ "Guestrin", "C.", "" ], [ "Hauskrecht", "M.", "" ], [ "Kveton", "B.", "" ] ]
1110.0029
Xavier Carreras
M. Surdeanu, L. Marquez, X. Carreras, P. R. Comas
Combination Strategies for Semantic Role Labeling
null
Journal Of Artificial Intelligence Research, Volume 29, pages 105-151, 2007
10.1613/jair.2088
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces and analyzes a battery of inference models for the problem of semantic role labeling: one based on constraint satisfaction, and several strategies that model the inference as a meta-learning problem using discriminative classifiers. These classifiers are developed with a rich set of novel features that encode proposition and sentence-level information. To our knowledge, this is the first work that: (a) performs a thorough analysis of learning-based inference models for semantic role labeling, and (b) compares several inference strategies in this context. We evaluate the proposed inference strategies in the framework of the CoNLL-2005 shared task using only automatically-generated syntactic information. The extensive experimental evaluation and analysis indicate that all the proposed inference strategies are successful (they all outperform the current best results reported in the CoNLL-2005 evaluation exercise), but each of the proposed approaches has its advantages and disadvantages. Several important traits of a state-of-the-art SRL combination strategy emerge from this analysis: (i) individual models should be combined at the granularity of candidate arguments rather than at the granularity of complete solutions; (ii) the best combination strategy uses an inference model based on learning; and (iii) the learning-based inference benefits from max-margin classifiers and global feedback.
[ { "version": "v1", "created": "Fri, 30 Sep 2011 20:58:00 GMT" }, { "version": "v2", "created": "Tue, 4 Oct 2011 17:05:51 GMT" } ]
1,426,723,200,000
[ [ "Surdeanu", "M.", "" ], [ "Marquez", "L.", "" ], [ "Carreras", "X.", "" ], [ "Comas", "P. R.", "" ] ]
1110.0248
Yongzhi Cao
Yongzhi Cao, Huaiqing Wang, Sherry X. Sun, and Guoqing Chen
A Behavioral Distance for Fuzzy-Transition Systems
12 double column pages
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
In contrast to the existing approaches to bisimulation for fuzzy systems, we introduce a behavioral distance to measure the behavioral similarity of states in a nondeterministic fuzzy-transition system. This behavioral distance is defined as the greatest fixed point of a suitable monotonic function and provides a quantitative analogue of bisimilarity. The behavioral distance has the important property that two states are at zero distance if and only if they are bisimilar. Moreover, for any given threshold, we find that states with behavioral distances bounded by the threshold are equivalent. In addition, we show that two system combinators, parallel composition and product, are non-expansive with respect to our behavioral distance, which makes compositional verification possible.
[ { "version": "v1", "created": "Mon, 3 Oct 2011 00:32:22 GMT" } ]
1,426,723,200,000
[ [ "Cao", "Yongzhi", "" ], [ "Wang", "Huaiqing", "" ], [ "Sun", "Sherry X.", "" ], [ "Chen", "Guoqing", "" ] ]
1110.1016
S. Edelkamp
S. Edelkamp, R. Englert, J. Hoffmann, F. Liporace, S. Thiebaux, S. Trueg
Engineering Benchmarks for Planning: the Domains Used in the Deterministic Part of IPC-4
null
Journal Of Artificial Intelligence Research, Volume 26, pages 453-541, 2006
10.1613/jair.1982
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a field of research about general reasoning mechanisms, it is essential to have appropriate benchmarks. Ideally, the benchmarks should reflect possible applications of the developed technology. In AI Planning, researchers increasingly tend to draw their testing examples from the benchmark collections used in the International Planning Competition (IPC). In the organization of (the deterministic part of) the fourth IPC, IPC-4, the authors therefore invested significant effort to create a useful set of benchmarks. They come from five different (potential) real-world applications of planning: airport ground traffic control, oil derivative transportation in pipeline networks, model-checking safety properties, power supply restoration, and UMTS call setup. Adapting and preparing such an application for use as a benchmark in the IPC involves, at the same time, inevitable (often drastic) simplifications, as well as careful choice between, and engineering of, domain encodings. For the first time in the IPC, we used compilations to formulate complex domain features in simple languages such as STRIPS, rather than just dropping the more interesting problem constraints in the simpler language subsets. The article explains and discusses the five application domains and their adaptation to form the PDDL test suites used in IPC-4. We summarize known theoretical results on structural properties of the domains, regarding their computational complexity and provable properties of their topology under the h+ function (an idealized version of the relaxed plan heuristic). We present new (empirical) results illuminating properties such as the quality of the most widespread heuristic functions (planning graph, serial planning graph, and relaxed plan), the growth of propositional representations over instance size, and the number of actions available to achieve each fact; we discuss these data in conjunction with the best results achieved by the different kinds of planners participating in IPC-4.
[ { "version": "v1", "created": "Thu, 29 Sep 2011 19:02:41 GMT" } ]
1,317,859,200,000
[ [ "Edelkamp", "S.", "" ], [ "Englert", "R.", "" ], [ "Hoffmann", "J.", "" ], [ "Liporace", "F.", "" ], [ "Thiebaux", "S.", "" ], [ "Trueg", "S.", "" ] ]
1110.2200
M. Fox
M. Fox, D. Long
Modelling Mixed Discrete-Continuous Domains for Planning
null
Journal Of Artificial Intelligence Research, Volume 27, pages 235-297, 2006
10.1613/jair.2044
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present pddl+, a planning domain description language for modelling mixed discrete-continuous planning domains. We describe the syntax and modelling style of pddl+, showing that the language makes it convenient to model complex time-dependent effects. We provide a formal semantics for pddl+ by mapping planning instances into constructs of hybrid automata. Using the syntax of HAs as our semantic model, we construct a semantic mapping to labelled transition systems to complete the formal interpretation of pddl+ planning instances. An advantage of building a mapping from pddl+ to HA theory is that it forms a bridge between the Planning and Real Time Systems research communities. One consequence is that we can expect to make use of some of the theoretical properties of HAs. For example, for a restricted class of HAs the Reachability problem (which is equivalent to Plan Existence) is decidable. pddl+ provides an alternative to the continuous durative action model of pddl2.1, adding a more flexible and robust model of time-dependent behaviour.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 21:16:30 GMT" } ]
1,318,377,600,000
[ [ "Fox", "M.", "" ], [ "Long", "D.", "" ] ]
1110.2203
R. H. C. Yap
R. H. C. Yap, Y. Zhang
Set Intersection and Consistency in Constraint Networks
null
Journal Of Artificial Intelligence Research, Volume 27, pages 441-464, 2006
10.1613/jair.2058
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show that there is a close relation between consistency in a constraint network and set intersection. A proof schema is provided as a generic way to obtain consistency properties from properties on set intersection. This approach not only simplifies the understanding of, and unifies, many existing consistency results, but also directs the study of consistency to that of set intersection properties in many situations, as demonstrated by the results on the convexity and tightness of constraints in this paper. Specifically, we identify a new class of tree convex constraints where local consistency ensures global consistency. This generalizes row convex constraints. Various consistency results are also obtained on constraint networks where only some constraints, in contrast to all constraints in the existing work, are tight.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 21:27:27 GMT" } ]
1,318,377,600,000
[ [ "Yap", "R. H. C.", "" ], [ "Zhang", "Y.", "" ] ]
1110.2204
J. Culberson
J. Culberson, Y. Gao
Consistency and Random Constraint Satisfaction Models
null
Journal Of Artificial Intelligence Research, Volume 28, pages 517-557, 2007
10.1613/jair.2155
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the possibility of designing non-trivial random CSP models by exploiting the intrinsic connection between structures and typical-case hardness. We show that constraint consistency, a notion that has been developed to improve the efficiency of CSP algorithms, is in fact the key to the design of random CSP models that have interesting phase transition behavior and guaranteed exponential resolution complexity without putting much restriction on the parameter of constraint tightness or the domain size of the problem. We propose a very flexible framework for constructing problem instances with interesting behavior and develop a variety of concrete methods to construct specific random CSP models that enforce different levels of constraint consistency. A series of experimental studies with interesting observations is carried out to illustrate the effectiveness of introducing structural elements in random instances, to verify the robustness of our proposal, and to investigate features of some specific models based on our framework that are highly related to the behavior of backtracking search algorithms.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 21:35:30 GMT" } ]
1,318,377,600,000
[ [ "Culberson", "J.", "" ], [ "Gao", "Y.", "" ] ]
1110.2205
E. Pontelli
E. Pontelli, T. C. Son, P. H. Tu
Answer Sets for Logic Programs with Arbitrary Abstract Constraint Atoms
null
Journal Of Artificial Intelligence Research, Volume 29, pages 353-389, 2007
10.1613/jair.2171
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present two alternative approaches to defining answer sets for logic programs with arbitrary types of abstract constraint atoms (c-atoms). These approaches generalize the fixpoint-based and the level mapping based answer set semantics of normal logic programs to the case of logic programs with arbitrary types of c-atoms. The results are four different answer set definitions which are equivalent when applied to normal logic programs. The standard fixpoint-based semantics of logic programs is generalized in two directions, called answer set by reduct and answer set by complement. These definitions, which differ from each other in the treatment of negation-as-failure (naf) atoms, make use of an immediate consequence operator to perform answer set checking, whose definition relies on the notion of conditional satisfaction of c-atoms w.r.t. a pair of interpretations. The other two definitions, called strongly and weakly well-supported models, are generalizations of the notion of well-supported models of normal logic programs to the case of programs with c-atoms. As for the case of fixpoint-based semantics, the difference between these two definitions is rooted in the treatment of naf atoms. We prove that answer sets by reduct (resp. by complement) are equivalent to weakly (resp. strongly) well-supported models of a program, thus generalizing the theorem on the correspondence between stable models and well-supported models of a normal logic program to the class of programs with c-atoms. We show that the newly defined semantics coincide with previously introduced semantics for logic programs with monotone c-atoms, and they extend the original answer set semantics of normal logic programs. We also study some properties of answer sets of programs with c-atoms, and relate our definitions to several semantics for logic programs with aggregates presented in the literature.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 21:36:17 GMT" } ]
1,318,377,600,000
[ [ "Pontelli", "E.", "" ], [ "Son", "T. C.", "" ], [ "Tu", "P. H.", "" ] ]
1110.2209
A. S. Fukunaga
A. S. Fukunaga, R. E. Korf
Bin Completion Algorithms for Multicontainer Packing, Knapsack, and Covering Problems
null
Journal Of Artificial Intelligence Research, Volume 28, pages 393-429, 2007
10.1613/jair.2106
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many combinatorial optimization problems such as the bin packing and multiple knapsack problems involve assigning a set of discrete objects to multiple containers. These problems can be used to model task and resource allocation problems in multi-agent systems and distributed systems, and can also be found as subproblems of scheduling problems. We propose bin completion, a branch-and-bound strategy for one-dimensional, multicontainer packing problems. Bin completion combines a bin-oriented search space with a powerful dominance criterion that enables us to prune much of the space. The performance of the basic bin completion framework can be enhanced by using a number of extensions, including nogood-based pruning techniques that allow further exploitation of the dominance criterion. Bin completion is applied to four problems: multiple knapsack, bin covering, min-cost covering, and bin packing. We show that our bin completion algorithms yield new, state-of-the-art results for the multiple knapsack, bin covering, and min-cost covering problems, outperforming previous algorithms by several orders of magnitude with respect to runtime on some classes of hard, random problem instances. For the bin packing problem, we demonstrate significant improvements compared to most previous results, but show that bin completion is not competitive with current state-of-the-art cutting-stock based approaches.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 21:55:37 GMT" } ]
1,318,377,600,000
[ [ "Fukunaga", "A. S.", "" ], [ "Korf", "R. E.", "" ] ]
1110.2212
F. Rossi
F. Rossi, K. B. Venable, N. Yorke-Smith
Uncertainty in Soft Temporal Constraint Problems: A General Framework and Controllability Algorithms for the Fuzzy Case
null
Journal Of Artificial Intelligence Research, Volume 27, pages 617-674, 2006
10.1613/jair.2135
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In real-life temporal scenarios, uncertainty and preferences are often essential and coexisting aspects. We present a formalism where quantitative temporal constraints with both preferences and uncertainty can be defined. We show how three classical notions of controllability (that is, strong, weak, and dynamic), which have been developed for uncertain temporal problems, can be generalized to handle preferences as well. After defining this general framework, we focus on problems where preferences follow the fuzzy approach, and with properties that assure tractability. For such problems, we propose algorithms to check the presence of the controllability properties. In particular, we show that in such a setting dealing simultaneously with preferences and uncertainty does not increase the complexity of controllability testing. We also develop a dynamic execution algorithm, of polynomial complexity, that produces temporal plans under uncertainty that are optimal with respect to fuzzy preferences.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 22:02:16 GMT" } ]
1,618,185,600,000
[ [ "Rossi", "F.", "" ], [ "Venable", "K. B.", "" ], [ "Yorke-Smith", "N.", "" ] ]
1110.2213
C. Bettini
C. Bettini, S. Mascetti, X. S. Wang
Supporting Temporal Reasoning by Mapping Calendar Expressions to Minimal Periodic Sets
null
Journal Of Artificial Intelligence Research, Volume 28, pages 299-348, 2007
10.1613/jair.2136
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years several research efforts have focused on the concept of time granularity and its applications. A first stream of research investigated the mathematical models behind the notion of granularity and the algorithms to manage temporal data based on those models. A second stream of research investigated symbolic formalisms providing a set of algebraic operators to define granularities in a compact and compositional way. However, only very limited manipulation algorithms have been proposed to operate directly on the algebraic representation, making the symbolic formalisms unsuitable for applications that need to manipulate granularities. This paper aims at filling the gap between the results from these two streams of research, by providing an efficient conversion from the algebraic representation to the equivalent low-level representation based on the mathematical models. In addition, the conversion returns a minimal representation in terms of period length. Our results have a major practical impact: users can more easily define arbitrary granularities in terms of algebraic operators, and then access granularity reasoning and other services operating efficiently on the equivalent, minimal low-level representation. As an example, we illustrate the application to temporal constraint reasoning with multiple granularities. From a technical point of view, we propose a hybrid algorithm that interleaves the conversion of calendar subexpressions into periodical sets with the minimization of the period length. The algorithm returns set-based granularity representations having minimal period length, which is the most relevant parameter for the performance of the considered reasoning services. Extensive experimental work supports the techniques used in the algorithm, and shows the efficiency and effectiveness of the algorithm.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 22:03:55 GMT" } ]
1,318,377,600,000
[ [ "Bettini", "C.", "" ], [ "Mascetti", "S.", "" ], [ "Wang", "X. S.", "" ] ]
1110.2216
P. F. Felzenszwalb
P. F. Felzenszwalb, D. McAllester
The Generalized A* Architecture
null
Journal Of Artificial Intelligence Research, Volume 29, pages 153-190, 2007
10.1613/jair.2187
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of computing a lightest derivation of a global structure using a set of weighted rules. A large variety of inference problems in AI can be formulated in this framework. We generalize A* search and heuristics derived from abstractions to a broad class of lightest derivation problems. We also describe a new algorithm that searches for lightest derivations using a hierarchy of abstractions. Our generalization of A* gives a new algorithm for searching AND/OR graphs in a bottom-up fashion. We discuss how the algorithms described here provide a general architecture for addressing the pipeline problem --- the problem of passing information back and forth between various stages of processing in a perceptual system. We consider examples in computer vision and natural language processing. We apply the hierarchical search algorithm to the problem of estimating the boundaries of convex objects in grayscale images and compare it to other search methods. A second set of experiments demonstrates the use of a new compositional model for finding salient curves in images.
[ { "version": "v1", "created": "Mon, 10 Oct 2011 22:14:15 GMT" } ]
1,318,377,600,000
[ [ "Felzenszwalb", "P. F.", "" ], [ "McAllester", "D.", "" ] ]
1110.2726
D. Gabelaia
D. Gabelaia, R. Kontchakov, A. Kurucz, F. Wolter, M. Zakharyaschev
Combining Spatial and Temporal Logics: Expressiveness vs. Complexity
null
Journal Of Artificial Intelligence Research, Volume 23, pages 167-243, 2005
10.1613/jair.1537
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we construct and investigate a hierarchy of spatio-temporal formalisms that result from various combinations of propositional spatial and temporal logics such as the propositional temporal logic PTL, the spatial logics RCC-8, BRCC-8, S4u and their fragments. The obtained results give a clear picture of the trade-off between expressiveness and computational realisability within the hierarchy. We demonstrate how different combining principles as well as spatial and temporal primitives can produce NP-, PSPACE-, EXPSPACE-, 2EXPSPACE-complete, and even undecidable spatio-temporal logics out of components that are at most NP- or PSPACE-complete.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:17:41 GMT" } ]
1,318,464,000,000
[ [ "Gabelaia", "D.", "" ], [ "Kontchakov", "R.", "" ], [ "Kurucz", "A.", "" ], [ "Wolter", "F.", "" ], [ "Zakharyaschev", "M.", "" ] ]
1110.2728
A. Gerevini
A. Gerevini, A. Saetti, I. Serina
An Approach to Temporal Planning and Scheduling in Domains with Predictable Exogenous Events
null
Journal Of Artificial Intelligence Research, Volume 25, pages 187-231, 2006
10.1613/jair.1742
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The treatment of exogenous events in planning is practically important in many real-world domains where the preconditions of certain plan actions are affected by such events. In this paper we focus on planning in temporal domains with exogenous events that happen at known times, imposing the constraint that certain actions in the plan must be executed during some predefined time windows. When actions have durations, handling such temporal constraints adds an extra difficulty to planning. We propose an approach to planning in these domains which integrates constraint-based temporal reasoning into a graph-based planning framework using local search. Our techniques are implemented in a planner that took part in the 4th International Planning Competition (IPC-4). A statistical analysis of the results of IPC-4 demonstrates the effectiveness of our approach in terms of both CPU-time and plan quality. Additional experiments show the good performance of the temporal reasoning techniques integrated into our planner.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:18:04 GMT" } ]
1,318,464,000,000
[ [ "Gerevini", "A.", "" ], [ "Saetti", "A.", "" ], [ "Serina", "I.", "" ] ]
1110.2729
F. Bacchus
F. Bacchus
The Power of Modeling - a Response to PDDL2.1
null
Journal Of Artificial Intelligence Research, Volume 20, pages 125-132, 2003
10.1613/jair.1993
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this commentary I argue that although PDDL is a very useful standard for the planning competition, its design does not properly consider the issue of domain modeling. Hence, I would not advocate its use in specifying planning domains outside of the context of the planning competition. Rather, the field needs to explore different approaches and grapple more directly with the problem of effectively modeling and utilizing all of the diverse pieces of knowledge we typically have about planning domains.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:18:48 GMT" } ]
1,318,464,000,000
[ [ "Bacchus", "F.", "" ] ]
1110.2730
M. S. Boddy
M. S. Boddy
Imperfect Match: PDDL 2.1 and Real Applications
null
Journal Of Artificial Intelligence Research, Volume 20, pages 133-137, 2003
10.1613/jair.1994
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
PDDL was originally conceived and constructed as a lingua franca for the International Planning Competition. PDDL2.1 embodies a set of extensions intended to support the expression of something closer to real planning problems. This objective has only been partially achieved, due in large part to a deliberate focus on not moving too far from classical planning models and solution methods.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:19:10 GMT" } ]
1,318,464,000,000
[ [ "Boddy", "M. S.", "" ] ]
1110.2731
H. A. Geffner
H. A. Geffner
PDDL 2.1: Representation vs. Computation
null
Journal Of Artificial Intelligence Research, Volume 20, pages 139-144, 2003
10.1613/jair.1995
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I comment on the PDDL 2.1 language and its use in the planning competition, focusing on the choices made for accommodating time and concurrency. I also discuss some methodological issues that have to do with the move toward more expressive planning languages and the balance needed in planning research between semantics and computation.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:19:24 GMT" } ]
1,318,464,000,000
[ [ "Geffner", "H. A.", "" ] ]
1110.2732
J. C. Beck
J. C. Beck, N. Wilson
Proactive Algorithms for Job Shop Scheduling with Probabilistic Durations
null
Journal Of Artificial Intelligence Research, Volume 28, pages 183-232, 2007
10.1613/jair.2080
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most classical scheduling formulations assume a fixed and known duration for each activity. In this paper, we weaken this assumption, requiring instead that each duration can be represented by an independent random variable with a known mean and variance. The best solutions are ones which have a high probability of achieving a good makespan. We first create a theoretical framework, formally showing how Monte Carlo simulation can be combined with deterministic scheduling algorithms to solve this problem. We propose an associated deterministic scheduling problem whose solution is proved, under certain conditions, to be a lower bound for the probabilistic problem. We then propose and investigate a number of techniques for solving such problems based on combinations of Monte Carlo simulation, solutions to the associated deterministic problem, and either constraint programming or tabu search. Our empirical results demonstrate that a combination of the use of the associated deterministic problem and Monte Carlo simulation results in algorithms that scale best both in terms of problem size and uncertainty. Further experiments point to the correlation between the quality of the deterministic solution and the quality of the probabilistic solution as a major factor responsible for this success.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:19:43 GMT" } ]
1,318,464,000,000
[ [ "Beck", "J. C.", "" ], [ "Wilson", "N.", "" ] ]
1110.2734
A. Darwiche
A. Darwiche, J. Huang
The Language of Search
null
Journal Of Artificial Intelligence Research, Volume 29, pages 191-219, 2007
10.1613/jair.2097
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with a class of algorithms that perform exhaustive search on propositional knowledge bases. We show that each of these algorithms defines and generates a propositional language. Specifically, we show that the trace of a search can be interpreted as a combinational circuit, and a search algorithm then defines a propositional language consisting of circuits that are generated across all possible executions of the algorithm. In particular, we show that several versions of exhaustive DPLL search correspond to such well-known languages as FBDD, OBDD, and a precisely-defined subset of d-DNNF. By thus mapping search algorithms to propositional languages, we provide a uniform and practical framework in which successful search techniques can be harnessed for compilation of knowledge into various languages of interest, and a new methodology whereby the power and limitations of search algorithms can be understood by looking up the tractability and succinctness of the corresponding propositional languages.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:20:23 GMT" } ]
1,318,464,000,000
[ [ "Darwiche", "A.", "" ], [ "Huang", "J.", "" ] ]
1110.2735
L. Barbulescu
L. Barbulescu, A. E. Howe, M. Roberts, L. D. Whitley
Understanding Algorithm Performance on an Oversubscribed Scheduling Application
null
Journal Of Artificial Intelligence Research, Volume 27, pages 577-615, 2006
10.1613/jair.2038
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The best performing algorithms for a particular oversubscribed scheduling application, Air Force Satellite Control Network (AFSCN) scheduling, appear to have little in common. Yet, through careful experimentation and modeling of performance in real problem instances, we can relate characteristics of the best algorithms to characteristics of the application. In particular, we find that plateaus dominate the search spaces (thus favoring algorithms that make larger changes to solutions) and that some randomization in exploration is critical to good performance (due to the lack of gradient information on the plateaus). Based on our explanations of algorithm performance, we develop a new algorithm that combines characteristics of the best performers; the new algorithm's performance is better than the previous best. We show how hypothesis-driven experimentation and search modeling can both explain algorithm performance and motivate the design of a new algorithm.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:21:07 GMT" } ]
1,318,464,000,000
[ [ "Barbulescu", "L.", "" ], [ "Howe", "A. E.", "" ], [ "Roberts", "M.", "" ], [ "Whitley", "L. D.", "" ] ]
1110.2736
A. I. Coles
A. I. Coles, A. J. Smith
Marvin: A Heuristic Search Planner with Online Macro-Action Learning
null
Journal Of Artificial Intelligence Research, Volume 28, pages 119-156, 2007
10.1613/jair.2077
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes Marvin, a planner that competed in the Fourth International Planning Competition (IPC 4). Marvin uses action-sequence-memoisation techniques to generate macro-actions, which are then used during search for a solution plan. We provide an overview of its architecture and search behaviour, detailing the algorithms used. We also empirically demonstrate the effectiveness of its features in various planning domains; in particular, the effects on performance due to the use of macro-actions, the novel features of its search behaviour, and the native support of ADL and Derived Predicates.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:21:41 GMT" } ]
1,318,464,000,000
[ [ "Coles", "A. I.", "" ], [ "Smith", "A. J.", "" ] ]
1110.2737
E. A. Hansen
E. A. Hansen, R. Zhou
Anytime Heuristic Search
null
Journal Of Artificial Intelligence Research, Volume 28, pages 267-297, 2007
10.1613/jair.2096
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We describe how to convert the heuristic search algorithm A* into an anytime algorithm that finds a sequence of improved solutions and eventually converges to an optimal solution. The approach we adopt uses weighted heuristic search to find an approximate solution quickly, and then continues the weighted search to find improved solutions as well as to improve a bound on the suboptimality of the current solution. When the time available to solve a search problem is limited or uncertain, this creates an anytime heuristic search algorithm that allows a flexible tradeoff between search time and solution quality. We analyze the properties of the resulting Anytime A* algorithm, and consider its performance in three domains: sliding-tile puzzles, STRIPS planning, and multiple sequence alignment. To illustrate the generality of this approach, we also describe how to transform the memory-efficient search algorithm Recursive Best-First Search (RBFS) into an anytime algorithm.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:21:59 GMT" } ]
1,318,464,000,000
[ [ "Hansen", "E. A.", "" ], [ "Zhou", "R.", "" ] ]
1110.2738
Y. Chen
Y. Chen, F. Lin
Discovering Classes of Strongly Equivalent Logic Programs
null
Journal Of Artificial Intelligence Research, Volume 28, pages 431-451, 2007
10.1613/jair.2131
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we apply computer-aided theorem discovery technique to discover theorems about strongly equivalent logic programs under the answer set semantics. Our discovered theorems capture new classes of strongly equivalent logic programs that can lead to new program simplification rules that preserve strong equivalence. Specifically, with the help of computers, we discovered exact conditions that capture the strong equivalence between a rule and the empty set, between two rules, between two rules and one of the two rules, between two rules and another rule, and between three rules and two of the three rules.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:22:15 GMT" } ]
1,318,464,000,000
[ [ "Chen", "Y.", "" ], [ "Lin", "F.", "" ] ]
1110.2739
N. Creignou
N. Creignou, H. Daude, U. Egly
Phase Transition for Random Quantified XOR-Formulas
null
Journal Of Artificial Intelligence Research, Volume 29, pages 1-18, 2007
10.1613/jair.2120
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The QXORSAT problem is the quantified version of the satisfiability problem XORSAT in which the connective exclusive-or is used instead of the usual or. We study the phase transition associated with random QXORSAT instances. We give a description of this phase transition in the case of one alternation of quantifiers, thus performing an advanced practical and theoretical study on the phase transition of a quantified problem.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:22:52 GMT" } ]
1,318,464,000,000
[ [ "Creignou", "N.", "" ], [ "Daude", "H.", "" ], [ "Egly", "U.", "" ] ]
1110.2740
B. Bidyuk
B. Bidyuk, R. Dechter
Cutset Sampling for Bayesian Networks
null
Journal Of Artificial Intelligence Research, Volume 28, pages 1-48, 2007
10.1613/jair.2149
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper presents a new sampling methodology for Bayesian networks that samples only a subset of variables and applies exact inference to the rest. Cutset sampling is a network structure-exploiting application of the Rao-Blackwellisation principle to sampling in Bayesian networks. It improves convergence by exploiting memory-based inference algorithms. It can also be viewed as an anytime approximation of the exact cutset-conditioning algorithm developed by Pearl. Cutset sampling can be implemented efficiently when the sampled variables constitute a loop-cutset of the Bayesian network and, more generally, when the induced width of the network's graph conditioned on the observed sampled variables is bounded by a constant w. We demonstrate empirically the benefit of this scheme on a range of benchmarks.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:23:15 GMT" } ]
1,318,464,000,000
[ [ "Bidyuk", "B.", "" ], [ "Dechter", "R.", "" ] ]
1110.2741
C. Pralet
C. Pralet, T. Schiex, G. Verfaillie
An Algebraic Graphical Model for Decision with Uncertainties, Feasibilities, and Utilities
null
Journal Of Artificial Intelligence Research, Volume 29, pages 421-489, 2007
10.1613/jair.2151
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Numerous formalisms and dedicated algorithms have been designed in the last decades to model and solve decision making problems. Some formalisms, such as constraint networks, can express "simple" decision problems, while others are designed to take into account uncertainties, unfeasible decisions, and utilities. Even in a single formalism, several variants are often proposed to model different types of uncertainty (probability, possibility...) or utility (additive or not). In this article, we introduce an algebraic graphical model that encompasses a large number of such formalisms: (1) we first adapt previous structures from Friedman, Chu and Halpern for representing uncertainty, utility, and expected utility in order to deal with generic forms of sequential decision making; (2) on these structures, we then introduce composite graphical models that express information via variables linked by "local" functions, thanks to conditional independence; (3) on these graphical models, we finally define a simple class of queries which can represent various scenarios in terms of observabilities and controllabilities. A natural decision-tree semantics for such queries is completed by an equivalent operational semantics, which induces generic algorithms. The proposed framework, called the Plausibility-Feasibility-Utility (PFU) framework, not only provides a better understanding of the links between existing formalisms, but it also covers yet unpublished frameworks (such as possibilistic influence diagrams) and unifies formalisms such as quantified Boolean formulas and influence diagrams. Our backtrack and variable elimination generic algorithms are a first step towards unified algorithms.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:23:33 GMT" } ]
1,318,464,000,000
[ [ "Pralet", "C.", "" ], [ "Schiex", "T.", "" ], [ "Verfaillie", "G.", "" ] ]
1110.2742
T. Di Noia
T. Di Noia, E. Di Sciascio, F. M. Donini
Semantic Matchmaking as Non-Monotonic Reasoning: A Description Logic Approach
null
Journal Of Artificial Intelligence Research, Volume 29, pages 269-307, 2007
10.1613/jair.2153
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Matchmaking arises when supply and demand meet in an electronic marketplace, or when agents search for a web service to perform some task, or even when recruiting agencies match curricula and job profiles. In such open environments, the objective of a matchmaking process is to discover the best available offers to a given request. We address the problem of matchmaking from a knowledge representation perspective, with a formalization based on Description Logics. We devise Concept Abduction and Concept Contraction as non-monotonic inferences in Description Logics suitable for modeling matchmaking in a logical framework, and prove some related complexity results. We also present reasonable algorithms for semantic matchmaking based on the devised inferences, and prove that they obey some commonsense properties. Finally, we report on the implementation of the proposed matchmaking framework, which has been used both as a mediator in e-marketplaces and for semantic web services discovery.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:23:49 GMT" } ]
1,318,464,000,000
[ [ "Di Noia", "T.", "" ], [ "Di Sciascio", "E.", "" ], [ "Donini", "F. M.", "" ] ]
1110.2743
J. C. Beck
J. C. Beck
Solution-Guided Multi-Point Constructive Search for Job Shop Scheduling
null
Journal Of Artificial Intelligence Research, Volume 29, pages 49-77, 2007
10.1613/jair.2169
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Solution-Guided Multi-Point Constructive Search (SGMPCS) is a novel constructive search technique that performs a series of resource-limited tree searches where each search begins either from an empty solution (as in randomized restart) or from a solution that has been encountered during the search. A small number of these "elite solutions" is maintained during the search. We introduce the technique and perform three sets of experiments on the job shop scheduling problem. First, a systematic, fully crossed study of SGMPCS is carried out to evaluate the performance impact of various parameter settings. Second, we inquire into the diversity of the elite solution set, showing, contrary to expectations, that a less diverse set leads to stronger performance. Finally, we compare the best parameter setting of SGMPCS from the first two experiments to chronological backtracking, limited discrepancy search, randomized restart, and a sophisticated tabu search algorithm on a set of well-known benchmark problems. Results demonstrate that SGMPCS is significantly better than the other constructive techniques tested, though it lags behind the tabu search.
[ { "version": "v1", "created": "Wed, 12 Oct 2011 18:24:37 GMT" } ]
1,318,464,000,000
[ [ "Beck", "J. C.", "" ] ]
1110.3002
Carlos Gershenson
Carlos Gershenson
Are Minds Computable?
7 pages, comments welcome
null
null
C3 Report 2011.08
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This essay explores the limits of Turing machines concerning the modeling of minds and suggests alternatives to go beyond those limits.
[ { "version": "v1", "created": "Thu, 13 Oct 2011 17:26:03 GMT" } ]
1,318,550,400,000
[ [ "Gershenson", "Carlos", "" ] ]
1110.3385
Tshilidzi Marwala
Pretesh Patel and Tshilidzi Marwala
Fuzzy Inference Systems Optimization
Paper Submitted to INTECH
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper compares various optimization methods for fuzzy inference system optimization. The optimization methods compared are the genetic algorithm, particle swarm optimization and simulated annealing. When these techniques were implemented, it was observed that the performance of each technique within the fuzzy inference system classification was context dependent.
[ { "version": "v1", "created": "Sat, 15 Oct 2011 05:39:34 GMT" } ]
1,318,896,000,000
[ [ "Patel", "Pretesh", "" ], [ "Marwala", "Tshilidzi", "" ] ]
1110.3888
Yuming Xu
Xu Yuming
Handling controversial arguments by matrix
21 pages, 2 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the matrix and its blocks into Dung's theory of argumentation frameworks. It is shown that each argumentation framework has a matrix representation, and that the indirect attack relation and indirect defence relation can be characterized by computing this matrix. This provides a powerful mathematical way to determine the "controversial arguments" in an argumentation framework. We also introduce several kinds of blocks based on the matrix, and the various prudent semantics of argumentation frameworks can all be determined by computing and comparing the matrices and the blocks we have defined. In contrast with the traditional method of directed graphs, the matrix method has an important advantage: computability (it can even be realized easily on a computer). There is therefore a promising prospect for bringing matrix theory into the study of argumentation frameworks and related areas.
[ { "version": "v1", "created": "Tue, 18 Oct 2011 07:01:28 GMT" }, { "version": "v2", "created": "Thu, 20 Oct 2011 05:58:23 GMT" } ]
1,319,155,200,000
[ [ "Yuming", "Xu", "" ] ]
1110.4076
V. Bulitko
V. Bulitko, G. Lee
Learning in Real-Time Search: A Unifying Framework
null
Journal Of Artificial Intelligence Research, Volume 25, pages 119-157, 2006
10.1613/jair.1789
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time search methods are suited for tasks in which the agent is interacting with an initially unknown environment in real time. In such simultaneous planning and learning problems, the agent has to select its actions in a limited amount of time, while sensing only a local part of the environment centered at the agent's current location. Real-time heuristic search agents select actions by performing a limited lookahead search and evaluating the frontier states with a heuristic function. Over repeated experiences, they refine heuristic values of states to avoid infinite loops and to converge to better solutions. The prevalence of such settings in autonomous software and hardware agents has led to an explosion of real-time search algorithms over the last two decades. Not only is a potential user confronted with a hodgepodge of algorithms, but he also faces the choice of the control parameters they use. In this paper we address both problems. The first contribution is the introduction of a simple three-parameter framework (named LRTS) which extracts the core ideas behind many existing algorithms. We then prove that the LRTA*, epsilon-LRTA*, SLA*, and gamma-Trap algorithms are special cases of our framework. Thus, they are unified and extended with additional features. Second, we prove completeness and convergence of any algorithm covered by the LRTS framework. Third, we prove several upper bounds relating the control parameters and solution quality. Finally, we analyze the influence of the three control parameters empirically in the realistic scalable domains of real-time navigation on initially unknown maps from a commercial role-playing game as well as routing in ad hoc sensor networks.
[ { "version": "v1", "created": "Mon, 26 Sep 2011 17:00:02 GMT" } ]
1,318,982,400,000
[ [ "Bulitko", "V.", "" ], [ "Lee", "G.", "" ] ]
1110.4719
Thierry Petit
Thierry Petit, Nicolas Beldiceanu and Xavier Lorca
A Generalized Arc-Consistency Algorithm for a Class of Counting Constraints: Revised Edition that Incorporates One Correction
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces the SEQ BIN meta-constraint with a polytime algorithm achieving generalized arc-consistency according to some properties. SEQ BIN can be used for encoding counting constraints such as CHANGE, SMOOTH or INCREASING NVALUE. For some of these constraints and some of their variants GAC can be enforced with a time and space complexity linear in the sum of domain sizes, which improves or equals the best known results of the literature.
[ { "version": "v1", "created": "Fri, 21 Oct 2011 07:49:48 GMT" } ]
1,319,414,400,000
[ [ "Petit", "Thierry", "" ], [ "Beldiceanu", "Nicolas", "" ], [ "Lorca", "Xavier", "" ] ]
1110.5172
Valmi Dufour-Lussier
Valmi Dufour-Lussier (INRIA Lorraine - LORIA), Florence Le Ber (INRIA Lorraine - LORIA, LHyGeS), Jean Lieber (INRIA Lorraine - LORIA)
Which temporal formalisms for representing knowledge extracted from cooking recipe texts?
Repr\'esentation et raisonnement sur le temps et l'espace (2011)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The goal of the Taaable project is to create a case-based reasoning system for the retrieval and adaptation of cooking recipes. Within this framework, we discuss the temporal aspects of recipes and the means of representing them in order to adapt their text.
[ { "version": "v1", "created": "Mon, 24 Oct 2011 09:33:31 GMT" } ]
1,319,500,800,000
[ [ "Dufour-Lussier", "Valmi", "", "INRIA Lorraine - LORIA" ], [ "Ber", "Florence Le", "", "INRIA\n Lorraine - LORIA, LHyGeS" ], [ "Lieber", "Jean", "", "INRIA Lorraine - LORIA" ] ]
1110.6290
Lars Kotthoff
Ian P. Gent and Chris Jefferson and Lars Kotthoff and Ian Miguel
Modelling Constraint Solver Architecture Design as a Constraint Problem
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Designing component-based constraint solvers is a complex problem. Some components are required, some are optional and there are interdependencies between the components. Because of this, previous approaches to solver design and modification have been ad-hoc and limited. We present a system that transforms a description of the components and the characteristics of the target constraint solver into a constraint problem. Solving this problem yields the description of a valid solver. Our approach represents a significant step towards the automated design and synthesis of constraint solvers that are specialised for individual constraint problem classes or instances.
[ { "version": "v1", "created": "Fri, 28 Oct 2011 10:41:43 GMT" } ]
1,320,019,200,000
[ [ "Gent", "Ian P.", "" ], [ "Jefferson", "Chris", "" ], [ "Kotthoff", "Lars", "" ], [ "Miguel", "Ian", "" ] ]
1110.6589
Amit Mishra
Amit K. Mishra and Chris Baker
A cognitive diversity framework for radar target classification
null
The IET COGnitive systems with Interactive Sensors 2010
null
The IET COGnitive systems with Interactive Sensors 2010
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classification of targets by radar has proved to be notoriously difficult, with the best systems still yet to attain sufficiently high levels of performance and reliability. In the current contribution we explore a new design for radar-based target recognition, where angular diversity is used in a cognitive manner to attain better performance. Performance is benchmarked against conventional classification schemes. The proposed scheme can easily be extended to cognitive target recognition based on multiple diversity strategies.
[ { "version": "v1", "created": "Sun, 30 Oct 2011 09:26:34 GMT" } ]
1,320,105,600,000
[ [ "Mishra", "Amit K.", "" ], [ "Baker", "Chris", "" ] ]
1111.0039
I. Horrocks
I. Horrocks, J. Z. Pan, G. Stamou, G. Stoilos, V. Tzouvaras
Reasoning with Very Expressive Fuzzy Description Logics
null
Journal Of Artificial Intelligence Research, Volume 30, pages 273-320, 2007
10.1613/jair.2279
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It is widely recognized today that the management of imprecision and vagueness will yield more intelligent and realistic knowledge-based applications. Description Logics (DLs) are a family of knowledge representation languages that have gained considerable attention in the last decade, mainly due to their decidability and the existence of empirically high-performance reasoning algorithms. In this paper, we extend the well-known fuzzy ALC DL to the fuzzy SHIN DL, which extends the fuzzy ALC DL with transitive role axioms (S), inverse roles (I), role hierarchies (H) and number restrictions (N). We illustrate why transitive role axioms are difficult to handle in the presence of fuzzy interpretations and how to handle them properly. Then we extend these results by adding role hierarchies and finally number restrictions. The main contributions of the paper are the decidability proof of the fuzzy DL languages fuzzy-SI and fuzzy-SHIN, as well as decision procedures for the knowledge base satisfiability problem of fuzzy-SI and fuzzy-SHIN.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 21:37:41 GMT" } ]
1,320,192,000,000
[ [ "Horrocks", "I.", "" ], [ "Pan", "J. Z.", "" ], [ "Stamou", "G.", "" ], [ "Stoilos", "G.", "" ], [ "Tzouvaras", "V.", "" ] ]
1111.0040
C. M. Li
C. M. Li, F. Manya, J. Planes
New Inference Rules for Max-SAT
null
Journal Of Artificial Intelligence Research, Volume 30, pages 321-359, 2007
10.1613/jair.2215
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exact Max-SAT solvers, compared with SAT solvers, apply little inference at each node of the proof tree. Commonly used SAT inference rules like unit propagation produce a simplified formula that preserves satisfiability but, unfortunately, solving the Max-SAT problem for the simplified formula is not equivalent to solving it for the original formula. In this paper, we define a number of original inference rules that, besides being applied efficiently, transform Max-SAT instances into equivalent Max-SAT instances which are easier to solve. The soundness of the rules, which can be seen as refinements of unit resolution adapted to Max-SAT, is proved in a novel and simple way via an integer programming transformation. With the aim of finding out how powerful the inference rules are in practice, we have developed a new Max-SAT solver, called MaxSatz, which incorporates those rules, and performed an experimental investigation. The results provide empirical evidence that MaxSatz is very competitive, at least, on random Max-2SAT, random Max-3SAT, Max-Cut, and Graph 3-coloring instances, as well as on the benchmarks from the Max-SAT Evaluation 2006.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 21:39:39 GMT" } ]
1,320,192,000,000
[ [ "Li", "C. M.", "" ], [ "Manya", "F.", "" ], [ "Planes", "J.", "" ] ]
1111.0043
B. Faltings
B. Faltings, R. Jurca
Obtaining Reliable Feedback for Sanctioning Reputation Mechanisms
null
Journal Of Artificial Intelligence Research, Volume 29, pages 391-419, 2007
10.1613/jair.2243
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reputation mechanisms offer an effective alternative to verification authorities for building trust in electronic markets with moral hazard. Future clients guide their business decisions by considering the feedback from past transactions; if truthfully exposed, cheating behavior is sanctioned and thus becomes irrational. It therefore becomes important to ensure that rational clients have the right incentives to report honestly. As an alternative to side-payment schemes that explicitly reward truthful reports, we show that honesty can emerge as a rational behavior when clients have a repeated presence in the market. To this end we describe a mechanism that supports an equilibrium where truthful feedback is obtained. Then we characterize the set of Pareto-optimal equilibria of the mechanism, and derive an upper bound on the percentage of false reports that can be recorded by the mechanism. An important role in the existence of this bound is played by the fact that rational clients can establish a reputation for reporting honestly.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 21:43:18 GMT" } ]
1,320,192,000,000
[ [ "Faltings", "B.", "" ], [ "Jurca", "R.", "" ] ]
1111.0044
C. Domshlak
C. Domshlak, J. Hoffmann
Probabilistic Planning via Heuristic Forward Search and Weighted Model Counting
null
Journal Of Artificial Intelligence Research, Volume 30, pages 565-620, 2007
10.1613/jair.2289
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new algorithm for probabilistic planning with no observability. Our algorithm, called Probabilistic-FF, extends the heuristic forward-search machinery of Conformant-FF to problems with probabilistic uncertainty about both the initial state and action effects. Specifically, Probabilistic-FF combines Conformant-FF's techniques with a powerful machinery for weighted model counting in (weighted) CNFs, serving to elegantly define both the search space and the heuristic function. Our evaluation of Probabilistic-FF shows its fine scalability in a range of probabilistic domains, constituting a several orders of magnitude improvement over previous results in this area. We use a problematic case to point out the main open issue to be addressed by further research.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 21:47:05 GMT" } ]
1,320,192,000,000
[ [ "Domshlak", "C.", "" ], [ "Hoffmann", "J.", "" ] ]
1111.0049
B. Glimm
Birte Glimm, Ian Horrocks, Carsten Lutz, Ulrike Sattler
Conjunctive Query Answering for the Description Logic SHIQ
null
Journal Of Artificial Intelligence Research, Volume 31, pages 157-204, 2008
10.1613/jair.2372
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Conjunctive queries play an important role as an expressive query language for Description Logics (DLs). Although modern DLs usually provide for transitive roles, conjunctive query answering over DL knowledge bases is only poorly understood if transitive roles are admitted in the query. In this paper, we consider unions of conjunctive queries over knowledge bases formulated in the prominent DL SHIQ and allow transitive roles in both the query and the knowledge base. We show decidability of query answering in this setting and establish two tight complexity bounds: regarding combined complexity, we prove that there is a deterministic algorithm for query answering that needs time single exponential in the size of the KB and double exponential in the size of the query, which is optimal. Regarding data complexity, we prove containment in co-NP.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:01:42 GMT" } ]
1,320,192,000,000
[ [ "Glimm", "Birte", "" ], [ "Horrocks", "Ian", "" ], [ "Lutz", "Carsten", "" ], [ "Sattler", "Ulrike", "" ] ]
1111.0051
George M. Coghill
George M. Coghill, Ross D. King, Ashwin Srinivasan
Qualitative System Identification from Imperfect Data
null
Journal Of Artificial Intelligence Research, Volume 32, pages 825-877, 2008
10.1613/jair.2374
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Experience in the physical sciences suggests that the only realistic means of understanding complex systems is through the use of mathematical models. Typically, this has come to mean the identification of quantitative models expressed as differential equations. Quantitative modelling works best when the structure of the model (i.e., the form of the equations) is known; and the primary concern is one of estimating the values of the parameters in the model. For complex biological systems, the model-structure is rarely known and the modeler has to deal with both model-identification and parameter-estimation. In this paper we are concerned with providing automated assistance to the first of these problems. Specifically, we examine the identification by machine of the structural relationships between experimentally observed variables. These relationships will be expressed in the form of qualitative abstractions of a quantitative model. Such qualitative models may not only provide clues to the precise quantitative model, but also assist in understanding the essence of that model. Our position in this paper is that background knowledge incorporating system modelling principles can be used to constrain effectively the set of good qualitative models. Utilising the model-identification framework provided by Inductive Logic Programming (ILP) we present empirical support for this position using a series of increasingly complex artificial datasets. The results are obtained with qualitative and quantitative data subject to varying amounts of noise and different degrees of sparsity. The results also point to the presence of a set of qualitative states, which we term kernel subsets, that may be necessary for a qualitative model-learner to learn correct models. We demonstrate scalability of the method to biological system modelling by identification of the glycolysis metabolic pathway from data.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:02:30 GMT" } ]
1,320,192,000,000
[ [ "Coghill", "George M.", "" ], [ "King", "Ross D.", "" ], [ "Srinivasan", "Ashwin", "" ] ]
1111.0053
Malcolm Ross Kinsella Ryan
Malcolm Ross Kinsella Ryan
Exploiting Subgraph Structure in Multi-Robot Path Planning
null
Journal Of Artificial Intelligence Research, Volume 31, pages 497-542, 2008
10.1613/jair.2408
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-robot path planning is difficult due to the combinatorial explosion of the search space with every new robot added. Complete search of the combined state-space soon becomes intractable. In this paper we present a novel form of abstraction that allows us to plan much more efficiently. The key to this abstraction is the partitioning of the map into subgraphs of known structure with entry and exit restrictions which we can represent compactly. Planning then becomes a search in the much smaller space of subgraph configurations. Once an abstract plan is found, it can be quickly resolved into a correct (but possibly sub-optimal) concrete plan without the need for further search. We prove that this technique is sound and complete and demonstrate its practical effectiveness on a real map. A contending solution, prioritised planning, is also evaluated and shown to have similar performance albeit at the cost of completeness. The two approaches are not necessarily conflicting; we demonstrate how they can be combined into a single algorithm which outperforms either approach alone.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:09:48 GMT" } ]
1,320,192,000,000
[ [ "Ryan", "Malcolm Ross Kinsella", "" ] ]
1111.0055
Anastasia Analyti
Anastasia Analyti, Grigoris Antoniou, Carlos Viegas Dam\'asio, Gerd Wagner
Extended RDF as a Semantic Foundation of Rule Markup Languages
null
Journal Of Artificial Intelligence Research, Volume 32, pages 37-94, 2008
10.1613/jair.2425
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ontologies and automated reasoning are the building blocks of the Semantic Web initiative. Derivation rules can be included in an ontology to define derived concepts, based on base concepts. For example, rules make it possible to define the extension of a class or property, based on a complex relation between the extensions of the same or other classes and properties. On the other hand, the inclusion of negative information both in the form of negation-as-failure and explicit negative information is also needed to enable various forms of reasoning. In this paper, we extend RDF graphs with weak and strong negation, as well as derivation rules. The ERDF stable model semantics of the extended framework (Extended RDF) is defined, extending RDF(S) semantics. A distinctive feature of our theory, which is based on Partial Logic, is that both truth and falsity extensions of properties and classes are considered, allowing for truth value gaps. Our framework supports both closed-world and open-world reasoning through the explicit representation of the particular closed-world assumptions and the ERDF ontological categories of total properties and total classes.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:11:46 GMT" } ]
1,320,192,000,000
[ [ "Analyti", "Anastasia", "" ], [ "Antoniou", "Grigoris", "" ], [ "Damásio", "Carlos Viegas", "" ], [ "Wagner", "Gerd", "" ] ]
1111.0056
Omer Gim\'enez
Omer Gim\'enez, Anders Jonsson
The Complexity of Planning Problems With Simple Causal Graphs
null
Journal Of Artificial Intelligence Research, Volume 31, pages 319-351, 2008
10.1613/jair.2432
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present three new complexity results for classes of planning problems with simple causal graphs. First, we describe a polynomial-time algorithm that uses macros to generate plans for the class 3S of planning problems with binary state variables and acyclic causal graphs. This implies that plan generation may be tractable even when a planning problem has an exponentially long minimal solution. We also prove that the problem of plan existence for planning problems with multi-valued variables and chain causal graphs is NP-hard. Finally, we show that plan existence for planning problems with binary state variables and polytree causal graphs is NP-complete.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:12:22 GMT" } ]
1,320,192,000,000
[ [ "Giménez", "Omer", "" ], [ "Jonsson", "Anders", "" ] ]
1111.0059
Subbarao Kambhampati
Menkes Hector Louis van den Briel, Thomas Vossen, Subbarao Kambhampati
Loosely Coupled Formulations for Automated Planning: An Integer Programming Perspective
null
Journal Of Artificial Intelligence Research, Volume 31, pages 217-257, 2008
10.1613/jair.2443
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We represent planning as a set of loosely coupled network flow problems, where each network corresponds to one of the state variables in the planning domain. The network nodes correspond to the state variable values and the network arcs correspond to the value transitions. The planning problem is to find a path (a sequence of actions) in each network such that, when merged, they constitute a feasible plan. In this paper we present a number of integer programming formulations that model these loosely coupled networks with varying degrees of flexibility. Since merging may introduce exponentially many ordering constraints we implement a so-called branch-and-cut algorithm, in which these constraints are dynamically generated and added to the formulation when needed. Our results are very promising: they improve upon previous planning as integer programming approaches and lay the foundation for integer programming approaches for cost optimal planning.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:16:02 GMT" } ]
1,320,192,000,000
[ [ "Briel", "Menkes Hector Louis van den", "" ], [ "Vossen", "Thomas", "" ], [ "Kambhampati", "Subbarao", "" ] ]
1111.0060
J. Christopher Beck
Daria Terekhov, J. Christopher Beck
A Constraint Programming Approach for Solving a Queueing Control Problem
null
Journal Of Artificial Intelligence Research, Volume 32, pages 123-167, 2008
10.1613/jair.2446
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a facility with front room and back room operations, it is useful to switch workers between the rooms in order to cope with changing customer demand. Assuming stochastic customer arrival and service times, we seek a policy for switching workers such that the expected customer waiting time is minimized while the expected back room staffing is sufficient to perform all work. Three novel constraint programming models and several shaving procedures for these models are presented. Experimental results show that a model based on closed-form expressions together with a combination of shaving procedures is the most efficient. This model is able to find and prove optimal solutions for many problem instances within a reasonable run-time. Previously, the only available approach was a heuristic algorithm. Furthermore, a hybrid method combining the heuristic and the best constraint programming method is shown to perform as well as the heuristic in terms of solution quality over time, while achieving the same performance in terms of proving optimality as the pure constraint programming model. This is the first work of which we are aware that solves such queueing-based problems with constraint programming.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:16:41 GMT" } ]
1,320,192,000,000
[ [ "Terekhov", "Daria", "" ], [ "Beck", "J. Christopher", "" ] ]
1111.0062
Frans A. Oliehoek
Frans A. Oliehoek, Matthijs T. J. Spaan, Nikos Vlassis
Optimal and Approximate Q-value Functions for Decentralized POMDPs
null
Journal Of Artificial Intelligence Research, Volume 32, pages 289-353, 2008
10.1613/jair.2447
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decision-theoretic planning is a popular approach to sequential decision making problems, because it treats uncertainty in sensing and acting in a principled way. In single-agent frameworks like MDPs and POMDPs, planning can be carried out by resorting to Q-value functions: an optimal Q-value function Q* is computed in a recursive manner by dynamic programming, and then an optimal policy is extracted from Q*. In this paper we study whether similar Q-value functions can be defined for decentralized POMDP models (Dec-POMDPs), and how policies can be extracted from such value functions. We define two forms of the optimal Q-value function for Dec-POMDPs: one that gives a normative description as the Q-value function of an optimal pure joint policy and another one that is sequentially rational and thus gives a recipe for computation. This computation, however, is infeasible for all but the smallest problems. Therefore, we analyze various approximate Q-value functions that allow for efficient computation. We describe how they relate, and we prove that they all provide an upper bound to the optimal Q-value function Q*. Finally, unifying some previous approaches for solving Dec-POMDPs, we describe a family of algorithms for extracting policies from such Q-value functions, and perform an experimental evaluation on existing test problems, including a new firefighting benchmark problem.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:17:44 GMT" } ]
1,320,192,000,000
[ [ "Oliehoek", "Frans A.", "" ], [ "Spaan", "Matthijs T. J.", "" ], [ "Vlassis", "Nikos", "" ] ]
1111.0065
Claudia V. Goldman
Claudia V. Goldman, Shlomo Zilberstein
Communication-Based Decomposition Mechanisms for Decentralized MDPs
null
Journal Of Artificial Intelligence Research, Volume 32, pages 169-202, 2008
10.1613/jair.2466
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-agent planning in stochastic environments can be framed formally as a decentralized Markov decision problem. Many real-life distributed problems that arise in manufacturing, multi-robot coordination and information gathering scenarios can be formalized using this framework. However, finding the optimal solution in the general case is hard, limiting the applicability of recently developed algorithms. This paper provides a practical approach for solving decentralized control problems when communication among the decision makers is possible, but costly. We develop the notion of communication-based mechanism that allows us to decompose a decentralized MDP into multiple single-agent problems. In this framework, referred to as decentralized semi-Markov decision process with direct communication (Dec-SMDP-Com), agents operate separately between communications. We show that finding an optimal mechanism is equivalent to solving optimally a Dec-SMDP-Com. We also provide a heuristic search algorithm that converges on the optimal decomposition. Restricting the decomposition to some specific types of local behaviors reduces significantly the complexity of planning. In particular, we present a polynomial-time algorithm for the case in which individual agents perform goal-oriented behaviors between communications. The paper concludes with an additional tractable algorithm that enables the introduction of human knowledge, thereby reducing the overall problem to finding the best time to communicate. Empirical results show that these approaches provide good approximate solutions.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:22:03 GMT" } ]
1,320,192,000,000
[ [ "Goldman", "Claudia V.", "" ], [ "Zilberstein", "Shlomo", "" ] ]
1111.0067
Joseph Culberson
Fan Yang, Joseph Culberson, Robert Holte, Uzi Zahavi, Ariel Felner
A General Theory of Additive State Space Abstractions
null
Journal Of Artificial Intelligence Research, Volume 32, pages 631-662, 2008
10.1613/jair.2486
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Informally, a set of abstractions of a state space S is additive if the distance between any two states in S is always greater than or equal to the sum of the corresponding distances in the abstract spaces. The first known additive abstractions, called disjoint pattern databases, were experimentally demonstrated to produce state-of-the-art performance on certain state spaces. However, previous applications were restricted to state spaces with special properties, which precludes disjoint pattern databases from being defined for several commonly used testbeds, such as Rubik's Cube, TopSpin and the Pancake puzzle. In this paper we give a general definition of additive abstractions that can be applied to any state space and prove that heuristics based on additive abstractions are consistent as well as admissible. We use this new definition to create additive abstractions for these testbeds and show experimentally that well-chosen additive abstractions can reduce search time substantially for the (18,4)-TopSpin puzzle and by three orders of magnitude over state-of-the-art methods for the 17-Pancake puzzle. We also derive a way of testing if the heuristic value returned by additive abstractions is provably too low and show that the use of this test can reduce search time for the 15-puzzle and TopSpin by roughly a factor of two.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:26:44 GMT" } ]
1,320,192,000,000
[ [ "Yang", "Fan", "" ], [ "Culberson", "Joseph", "" ], [ "Holte", "Robert", "" ], [ "Zahavi", "Uzi", "" ], [ "Felner", "Ariel", "" ] ]
1111.0068
Saket Joshi
Chenggang Wang, Saket Joshi, Roni Khardon
First Order Decision Diagrams for Relational MDPs
null
Journal Of Artificial Intelligence Research, Volume 31, pages 431-472, 2008
10.1613/jair.2489
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Markov decision processes capture sequential decision making under uncertainty, where an agent must choose actions so as to optimize long term reward. The paper studies efficient reasoning mechanisms for Relational Markov Decision Processes (RMDP) where world states have an internal relational structure that can be naturally described in terms of objects and relations among them. Two contributions are presented. First, the paper develops First Order Decision Diagrams (FODD), a new compact representation for functions over relational structures, together with a set of operators to combine FODDs, and novel reduction techniques to keep the representation small. Second, the paper shows how FODDs can be used to develop solutions for RMDPs, where reasoning is performed at the abstract level and the resulting optimal policy is independent of domain size (number of objects) or instantiation. In particular, a variant of the value iteration algorithm is developed by using special operations over FODDs, and the algorithm is shown to converge to the optimal policy.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:27:57 GMT" } ]
1,320,192,000,000
[ [ "Wang", "Chenggang", "" ], [ "Joshi", "Saket", "" ], [ "Khardon", "Roni", "" ] ]
1111.0860
E. Giunchiglia
E. Giunchiglia, M. Narizzano, A. Tacchella
Clause/Term Resolution and Learning in the Evaluation of Quantified Boolean Formulas
null
Journal Of Artificial Intelligence Research, Volume 26, pages 371-416, 2006
10.1613/jair.1959
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Resolution is the rule of inference at the basis of most procedures for automated reasoning. In these procedures, the input formula is first translated into an equisatisfiable formula in conjunctive normal form (CNF) and then represented as a set of clauses. Deduction starts by inferring new clauses by resolution, and goes on until the empty clause is generated or satisfiability of the set of clauses is proven, e.g., because no new clauses can be generated. In this paper, we restrict our attention to the problem of evaluating Quantified Boolean Formulas (QBFs). In this setting, the deduction process outlined above is known to be sound and complete if given a formula in CNF and if a form of resolution, called Q-resolution, is used. We introduce Q-resolution on terms, to be used for formulas in disjunctive normal form. We show that the computation performed by most of the available procedures for QBFs --based on the Davis-Logemann-Loveland procedure (DLL) for propositional satisfiability-- corresponds to a tree in which Q-resolution on terms and clauses alternate. This lays the theoretical basis for the introduction of learning, corresponding to recording Q-resolution formulas associated with the nodes of the tree. We discuss the problems related to the introduction of learning in DLL-based procedures, and present solutions extending state-of-the-art proposals coming from the literature on propositional satisfiability. Finally, we show that our DLL-based solver, extended with learning, performs significantly better on benchmarks used in the 2003 QBF solvers comparative evaluation.
[ { "version": "v1", "created": "Mon, 26 Sep 2011 18:43:49 GMT" } ]
1,320,364,800,000
[ [ "Giunchiglia", "E.", "" ], [ "Narizzano", "M.", "" ], [ "Tacchella", "A.", "" ] ]
1111.1321
Oleg Varlamov
Oleg O. Varlamov
MIVAR: Transition from Productions to Bipartite Graphs MIVAR Nets and Practical Realization of Automated Constructor of Algorithms Handling More than Three Million Production Rules
23 pages, 21 figures
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The theoretical transition from the graphs of production systems to the bipartite graphs of the MIVAR nets is shown. Examples of the implementation of the MIVAR nets in the formalisms of matrices and graphs are given. The linear computational complexity of algorithms for the automated building of objects and rules of the MIVAR nets is theoretically proved. On the basis of the MIVAR nets, the UDAV software complex has been developed, handling more than 1.17 million objects and more than 3.5 million rules on ordinary computers. The results of experiments confirming the linear computational complexity of the MIVAR method of information processing are given. Keywords: MIVAR, MIVAR net, logical inference, computational complexity, artificial intelligence, intelligent systems, expert systems, General Problem Solver.
[ { "version": "v1", "created": "Sat, 5 Nov 2011 15:26:15 GMT" } ]
1,320,710,400,000
[ [ "Varlamov", "Oleg O.", "" ] ]
1111.1486
Yisong Wang
Yisong Wang and Jia-Huai You and Li Yan Yuan and Yi-Dong Shen and Thomas Eiter
Embedding Description Logic Programs into Default Logic
53 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Description logic programs (dl-programs) under the answer set semantics formulated by Eiter {\em et al.} have been considered as a prominent formalism for integrating rules and ontology knowledge bases. A question of interest has been whether dl-programs can be captured in a general formalism of nonmonotonic logic. In this paper, we study the possibility of embedding dl-programs into default logic. We show that dl-programs under the strong and weak answer set semantics can be embedded in default logic by combining two translations, one of which eliminates the constraint operator from nonmonotonic dl-atoms and the other translates a dl-program into a default theory. For dl-programs without nonmonotonic dl-atoms but with the negation-as-failure operator, our embedding is polynomial, faithful, and modular. In addition, our default logic encoding can be extended in a simple way to capture recently proposed weakly well-supported answer set semantics, for arbitrary dl-programs. These results reinforce the argument that default logic can serve as a fruitful foundation for query-based approaches to integrating ontology and rules. With its simple syntax and intuitive semantics, plus available computational results, default logic can be considered an attractive approach to integration of ontology and rules.
[ { "version": "v1", "created": "Mon, 7 Nov 2011 04:39:56 GMT" } ]
1,320,710,400,000
[ [ "Wang", "Yisong", "" ], [ "You", "Jia-Huai", "" ], [ "Yuan", "Li Yan", "" ], [ "Shen", "Yi-Dong", "" ], [ "Eiter", "Thomas", "" ] ]
1111.1941
Jean Vincent Fonou Dombeu
Jean Vincent Fonou-Dombeu and Magda Huisman
Semantic-Driven e-Government: Application of Uschold and King Ontology Building Methodology for Semantic Ontology Models Development
20 pages, 6 figures
International Journal of Web & Semantic Technology (IJWesT) Vol. 2, No. 4, October 2011, 1-20
10.5121/ijwest.2011.2401
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electronic government (e-government) has been one of the most active areas of ontology development during the past six years. In e-government, ontologies are being used to describe and specify e-government services (e-services) because they enable easy composition, matching, mapping and merging of various e-government services. More importantly, they also facilitate the semantic integration and interoperability of e-government services. However, it is still unclear in the current literature how an existing ontology building methodology can be applied to develop semantic ontology models in a government service domain. In this paper the Uschold and King ontology building methodology is applied to develop semantic ontology models in a government service domain. Firstly, the Uschold and King methodology is presented, discussed and applied to build a government domain ontology. Secondly, the domain ontology is evaluated for semantic consistency using its semi-formal representation in Description Logic. Thirdly, an alignment of the domain ontology with the Descriptive Ontology for Linguistic and Cognitive Engineering (DOLCE) upper level ontology is drawn to allow its wider visibility and facilitate its integration with existing metadata standards. Finally, the domain ontology is formally written in Web Ontology Language (OWL) to enable its automatic processing by computers. The study aims to provide direction for the application of existing ontology building methodologies in the Semantic Web development processes of e-government domain-specific ontology models, which would enable their repeatability in other e-government projects and strengthen the adoption of semantic technologies in e-government.
[ { "version": "v1", "created": "Tue, 8 Nov 2011 15:40:10 GMT" } ]
1,320,796,800,000
[ [ "Fonou-Dombeu", "Jean Vincent", "" ], [ "Huisman", "Magda", "" ] ]
1111.2249
H. H. Hoos
Lin Xu, Frank Hutter, Holger H. Hoos, Kevin Leyton-Brown
SATzilla: Portfolio-based Algorithm Selection for SAT
null
Journal Of Artificial Intelligence Research, Volume 32, pages 565-606, 2008
10.1613/jair.2490
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been widely observed that there is no single "dominant" SAT solver; instead, different solvers perform best on different instances. Rather than following the traditional approach of choosing the best solver for a given class of instances, we advocate making this decision online on a per-instance basis. Building on previous work, we describe SATzilla, an automated approach for constructing per-instance algorithm portfolios for SAT that use so-called empirical hardness models to choose among their constituent solvers. This approach takes as input a distribution of problem instances and a set of component solvers, and constructs a portfolio optimizing a given objective function (such as mean runtime, percent of instances solved, or score in a competition). The excellent performance of SATzilla was independently verified in the 2007 SAT Competition, where our SATzilla07 solvers won three gold, one silver and one bronze medal. In this article, we go well beyond SATzilla07 by making the portfolio construction scalable and completely automated, and improving it by integrating local search solvers as candidate solvers, by predicting performance score instead of runtime, and by using hierarchical hardness models that take into account different types of SAT instances. We demonstrate the effectiveness of these new techniques in extensive experimental results on data sets including instances from the most recent SAT competition.
[ { "version": "v1", "created": "Mon, 31 Oct 2011 22:28:59 GMT" } ]
1,320,883,200,000
[ [ "Xu", "Lin", "" ], [ "Hutter", "Frank", "" ], [ "Hoos", "Holger H.", "" ], [ "Leyton-Brown", "Kevin", "" ] ]
1111.2763
Nicolaie Popescu-Bodorin
N. Popescu-Bodorin, V.E. Balas, I.M. Motoc
8-Valent Fuzzy Logic for Iris Recognition and Biometry
6 pages, 2 figures, 5th IEEE Int. Symp. on Computational Intelligence and Intelligent Informatics (Floriana, Malta, September 15-17), ISBN: 978-1-4577-1861-8 (electronic), 978-1-4577-1860-1 (print), 2011
Proc. 5th IEEE Int. Symp. on Computational Intelligence and Intelligent Informatics, pp. 149-154, ISBN: 978-1-4577-1861-8 (electronic), 978-1-4577-1860-1 (print), IEEE Press, 2011
10.1109/ISCIII.2011.6069761
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper shows that maintaining logical consistency of an iris recognition system is a matter of finding a suitable partitioning of the input space into enrollable and unenrollable pairs by negotiating user comfort and the safety of the biometric system. In other words, consistent enrollment is mandatory in order to preserve system consistency. A fuzzy 3-valued disambiguated model of iris recognition is proposed and analyzed in terms of completeness, consistency, user comfort and biometric safety. It is also shown here that the fuzzy 3-valued model of iris recognition is hosted by an 8-valued Boolean algebra of modulo 8 integers, which represents the computational formalization in which a biometric system (a software agent) can achieve an artificial understanding of iris recognition in a logically consistent manner.
[ { "version": "v1", "created": "Tue, 8 Nov 2011 21:38:25 GMT" } ]
1,321,228,800,000
[ [ "Popescu-Bodorin", "N.", "" ], [ "Balas", "V. E.", "" ], [ "Motoc", "I. M.", "" ] ]
1111.3690
Nicolas Maudet
Yann Chevaleyre, J\'er\^ome Lang, Nicolas Maudet, J\'er\^ome Monnot, Lirong Xia
New Candidates Welcome! Possible Winners with respect to the Addition of New Candidates
34 pages
Mathematical Social Sciences 64(1), 2012
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In voting contexts, some new candidates may show up in the course of the process. In this case, we may want to determine which of the initial candidates are possible winners, given that a fixed number $k$ of new candidates will be added. We give a computational study of this problem, focusing on scoring rules, and we provide a formal comparison with related problems such as control via adding candidates or cloning.
[ { "version": "v1", "created": "Tue, 15 Nov 2011 23:41:11 GMT" } ]
1,424,131,200,000
[ [ "Chevaleyre", "Yann", "" ], [ "Lang", "Jérôme", "" ], [ "Maudet", "Nicolas", "" ], [ "Monnot", "Jérôme", "" ], [ "Xia", "Lirong", "" ] ]
1111.3934
Bill Hibbard
Bill Hibbard
Model-based Utility Functions
24 pages, extensive revisions
Journal of Artificial General Intelligence 3(1) 1-24, 2012
10.2478/v10229-011-0013-5
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Orseau and Ring, as well as Dewey, have recently described problems, including self-delusion, with the behavior of agents using various definitions of utility functions. An agent's utility function is defined in terms of the agent's history of interactions with its environment. This paper argues, via two examples, that the behavior problems can be avoided by formulating the utility function in two steps: 1) inferring a model of the environment from interactions, and 2) computing utility as a function of the environment model. Basing a utility function on a model that the agent must learn implies that the utility function must initially be expressed in terms of specifications to be matched to structures in the learned model. These specifications constitute prior assumptions about the environment, so this approach will not work with arbitrary environments. But the approach should work for agents designed by humans to act in the physical world. The paper also addresses the issue of self-modifying agents and shows that, if provided with the possibility to modify their utility functions, agents will not choose to do so, under some usual assumptions.
[ { "version": "v1", "created": "Wed, 16 Nov 2011 20:13:54 GMT" }, { "version": "v2", "created": "Sat, 12 May 2012 16:05:46 GMT" } ]
1,337,040,000,000
[ [ "Hibbard", "Bill", "" ] ]
1111.4083
Denis Berthier
Denis Berthier (DSI)
Unbiased Statistics of a CSP - A Controlled-Bias Generator
null
Innovations in Computing Sciences and Software Engineering, (2010) 165-170
10.1007/978-90-481-3660-5_28
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that estimating the complexity (mean and distribution) of the instances of a fixed size Constraint Satisfaction Problem (CSP) can be very hard. We deal with the two main aspects of the problem: defining a measure of complexity and generating random unbiased instances. For the first problem, we rely on a general framework and a measure of complexity we presented at CISSE08. For the generation problem, we restrict our analysis to the Sudoku example and we provide a solution that also explains why it is so difficult.
[ { "version": "v1", "created": "Thu, 17 Nov 2011 13:15:24 GMT" } ]
1,433,289,600,000
[ [ "Berthier", "Denis", "", "DSI" ] ]
1111.4232
Kirill Sorudeykin
Kirill A. Sorudeykin
A Model of Spatial Thinking for Computational Intelligence
8 pages, 5 figures; IEEE East-West Design & Test Symposium, 2011
null
10.1109/EWDTS.2011.6116427
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Anyone trying to be effective, no matter who and in what field, faces a problem that inevitably undermines attempts to reach a desired goal easily: the existence of insuperable barriers for the mind, in other words, barriers to the principles of thinking. These barriers are our clue and the main motivation for this research. Here we investigate them and their features, exposing the nature of the mental process. We start from special structures that reflect the ways relations between objects are defined. We then come to an understanding of the material our mind uses to build thoughts, draw conclusions, understand, and form reasoning; this can be called mental dynamics. After this, the nature of mental barriers at the required level of abstraction, as well as the ways to pass through them, becomes clear. We begin to understand why thinking flows the way it does, with the specifics and limitations we can observe in reality, which can help us be more effective. At the final step, we consider which mathematical models can be applied to such a picture, expressing these thoughts in the language of mathematics and developing an apparatus for our Spatial Theory of Mind, suitable for representing the processes and infrastructure of thinking. We use abstract algebra and remain invariant with respect to the nature of the objects involved.
[ { "version": "v1", "created": "Thu, 17 Nov 2011 22:22:21 GMT" } ]
1,479,340,800,000
[ [ "Sorudeykin", "Kirill A.", "" ] ]
1111.5689
Mehdi Kaytoue
Mehdi Kaytoue (INRIA Lorraine - LORIA), Sergei O. Kuznetsov, Amedeo Napoli (INRIA Lorraine - LORIA)
Revisiting Numerical Pattern Mining with Formal Concept Analysis
null
International Joint Conference on Artificial Intelligence (IJCAI) (2011)
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate the problem of mining numerical data in the framework of Formal Concept Analysis. The usual way is to use a scaling procedure --transforming numerical attributes into binary ones-- leading to a loss of either information or efficiency, in particular w.r.t. the volume of extracted patterns. By contrast, we propose to work directly on numerical data in a more precise and efficient way, and we prove it. For that, the notions of closed patterns, generators and equivalence classes are revisited in the numerical context. Moreover, two original algorithms are proposed and used in an evaluation involving real-world data, showing the predominance of the present approach.
[ { "version": "v1", "created": "Thu, 24 Nov 2011 07:55:16 GMT" } ]
1,322,438,400,000
[ [ "Kaytoue", "Mehdi", "", "INRIA Lorraine - LORIA" ], [ "Kuznetsov", "Sergei O.", "", "INRIA Lorraine - LORIA" ], [ "Napoli", "Amedeo", "", "INRIA Lorraine - LORIA" ] ]
1111.6117
Marcus Hutter
Peter Sunehag and Marcus Hutter
Principles of Solomonoff Induction and AIXI
14 LaTeX pages
Proc. Solomonoff 85th Memorial Conference (SOL 2011) pages 386-398
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We identify principles characterizing Solomonoff Induction by demands on an agent's external behaviour. Key concepts are rationality, computability, indifference and time consistency. Furthermore, we discuss extensions to the full AI case to derive AIXI.
[ { "version": "v1", "created": "Fri, 25 Nov 2011 21:35:29 GMT" } ]
1,405,382,400,000
[ [ "Sunehag", "Peter", "" ], [ "Hutter", "Marcus", "" ] ]
1111.6191
Albrecht Zimmermann
Bj\"orn Bringmann and Siegfried Nijssen and Albrecht Zimmermann
Pattern-Based Classification: A Unifying Perspective
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of patterns in predictive models is a topic that has received a lot of attention in recent years. Pattern mining can help to obtain models for structured domains, such as graphs and sequences, and has been proposed as a means to obtain more accurate and more interpretable models. Despite the large number of publications devoted to this topic, we believe, however, that an overview of what has been accomplished in this area is missing. This paper presents our perspective on this evolving area. We identify the principles of pattern mining that are important when mining patterns for models and provide an overview of pattern-based classification methods. We categorize these methods along the following dimensions: (1) whether they post-process a pre-computed set of patterns or iteratively execute pattern mining algorithms; (2) whether they select patterns model-independently or whether the pattern selection is guided by a model. We summarize the results that have been obtained for each of these methods.
[ { "version": "v1", "created": "Sat, 26 Nov 2011 20:11:56 GMT" } ]
1,322,524,800,000
[ [ "Bringmann", "Björn", "" ], [ "Nijssen", "Siegfried", "" ], [ "Zimmermann", "Albrecht", "" ] ]
1111.6401
Hajar Elmaghraoui
Hajar Elmaghraoui, Imane Zaoui, Dalila Chiadmi, Laila Benhlima
Graph based E-Government web service composition
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nowadays, e-government has emerged as a government policy to improve the quality and efficiency of public administrations. By exploiting the potential of new information and communication technologies, government agencies are providing a wide spectrum of online services. These services are composed of several web services that comply with well-defined processes. One of the big challenges is the need to optimize the composition of the elementary web services. In this paper, we present a solution for optimizing the computational effort in web service composition. Our method is based on graph theory. We model the semantic relationships between the involved web services through a directed graph. Then we compute all shortest paths using, for the first time, an extended version of the Floyd-Warshall algorithm.
[ { "version": "v1", "created": "Mon, 28 Nov 2011 10:52:28 GMT" } ]
1,322,524,800,000
[ [ "Elmaghraoui", "Hajar", "" ], [ "Zaoui", "Imane", "" ], [ "Chiadmi", "Dalila", "" ], [ "Benhlima", "Laila", "" ] ]
1111.6713
Mohammed Elmogy
Ahmed Tolba and Nabila Eladawi and Mohammed Elmogy
An Enhanced Indexing And Ranking Technique On The Semantic Web
8 pages, 7 figures
IJCSI International Journal of Computer Science Issues, Vol. 8, Issue 5, No 3, 2011, 118-125
null
null
cs.AI
http://creativecommons.org/licenses/by/3.0/
With the fast growth of the Internet, more and more information is available on the Web. The Semantic Web has many features which cannot be handled by traditional search engines. It extracts metadata for each discovered Web document in RDF or OWL formats, and computes relations between documents. We propose a hybrid indexing and ranking technique for the Semantic Web which finds relevant documents and computes the similarity among a set of documents. First, it retrieves the most relevant documents from the repository of Semantic Web Documents (SWDs) by using a modified version of the ObjectRank technique. Then, it creates a sub-graph of the most related SWDs. Finally, it returns the hubs and authorities of these documents by using the HITS algorithm. Our technique increases the quality of the results and decreases the execution time of processing the user's query.
[ { "version": "v1", "created": "Tue, 29 Nov 2011 07:24:27 GMT" } ]
1,322,611,200,000
[ [ "Tolba", "Ahmed", "" ], [ "Eladawi", "Nabila", "" ], [ "Elmogy", "Mohammed", "" ] ]
1111.6790
Sao Mai Nguyen
Sao Mai Nguyen (INRIA Bordeaux - Sud-Ouest), Adrien Baranes (INRIA Bordeaux - Sud-Ouest), Pierre-Yves Oudeyer (INRIA Bordeaux - Sud-Ouest)
Constraining the Size Growth of the Task Space with Socially Guided Intrinsic Motivation using Demonstrations
IJCAI Workshop on Agents Learning Interactively from Human Teachers (ALIHT), Barcelona : Spain (2011)
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents an algorithm for learning a highly redundant inverse model in continuous and non-preset environments. Our Socially Guided Intrinsic Motivation by Demonstrations (SGIM-D) algorithm combines the advantages of both social learning and intrinsic motivation, to specialise in a wide range of skills, while lessening its dependence on the teacher. SGIM-D is evaluated on a fishing skill learning experiment.
[ { "version": "v1", "created": "Tue, 29 Nov 2011 12:29:27 GMT" } ]
1,322,611,200,000
[ [ "Nguyen", "Sao Mai", "", "INRIA Bordeaux - Sud-Ouest" ], [ "Baranes", "Adrien", "", "INRIA\n Bordeaux - Sud-Ouest" ], [ "Oudeyer", "Pierre-Yves", "", "INRIA Bordeaux - Sud-Ouest" ] ]
1112.0508
Weiwei Cheng
Weiwei Cheng, Eyke H\"ullermeier
Label Ranking with Abstention: Predicting Partial Orders by Thresholding Probability Distributions (Extended Abstract)
4 pages, 1 figure, appeared at NIPS 2011 Choice Models and Preference Learning workshop
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider an extension of the setting of label ranking, in which the learner is allowed to make predictions in the form of partial instead of total orders. Predictions of that kind are interpreted as a partial abstention: If the learner is not sufficiently certain regarding the relative order of two alternatives, it may abstain from this decision and instead declare these alternatives as being incomparable. We propose a new method for learning to predict partial orders that improves on an existing approach, both theoretically and empirically. Our method is based on the idea of thresholding the probabilities of pairwise preferences between labels as induced by a predicted (parameterized) probability distribution on the set of all rankings.
[ { "version": "v1", "created": "Fri, 2 Dec 2011 17:09:43 GMT" } ]
1,323,043,200,000
[ [ "Cheng", "Weiwei", "" ], [ "Hüllermeier", "Eyke", "" ] ]
1112.1489
Wan-Li Chen
Wan-Li Chen
Multi-granular Perspectives on Covering
12 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The covering model provides a general framework for granular computing, in which overlaps among granules are almost indispensable. For any given covering, both the intersection and the union of the covering blocks containing an element are exploited as granules to form granular worlds at different abstraction levels, respectively, and transformations among these different granular worlds are also discussed. As an application of the presented multi-granular perspective on covering, the relational interpretation and axiomatization of four types of covering-based rough upper approximation operators are investigated, which can be dually applied to lower ones.
[ { "version": "v1", "created": "Wed, 7 Dec 2011 07:11:56 GMT" } ]
1,323,302,400,000
[ [ "Chen", "Wan-Li", "" ] ]
1112.2113
Varun Raj Kompella
Varun Raj Kompella, Matthew Luciw and Juergen Schmidhuber
Incremental Slow Feature Analysis: Adaptive and Episodic Learning from High-Dimensional Input Streams
null
Neural Computation, 2012, Vol. 24, No. 11, Pages 2994-3024
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Slow Feature Analysis (SFA) extracts features representing the underlying causes of changes within a temporally coherent high-dimensional raw sensory input signal. Our novel incremental version of SFA (IncSFA) combines incremental Principal Components Analysis and Minor Components Analysis. Unlike standard batch-based SFA, IncSFA adapts along with non-stationary environments, is amenable to episodic training, is not corrupted by outliers, and is covariance-free. These properties make IncSFA a generally useful unsupervised preprocessor for autonomous learning agents and robots. In IncSFA, the CCIPCA and MCA updates take the form of Hebbian and anti-Hebbian updating, extending the biological plausibility of SFA. In both single node and deep network versions, IncSFA learns to encode its input streams (such as high-dimensional video) by informative slow features representing meaningful abstract environmental properties. It can handle cases where batch SFA fails.
[ { "version": "v1", "created": "Fri, 9 Dec 2011 15:01:25 GMT" } ]
1,349,913,600,000
[ [ "Kompella", "Varun Raj", "" ], [ "Luciw", "Matthew", "" ], [ "Schmidhuber", "Juergen", "" ] ]
1112.2640
C\`esar Ferri
Jos\'e Hern\'andez-Orallo, Peter Flach, C\`esar Ferri
Threshold Choice Methods: the Missing Link
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many performance metrics have been introduced for the evaluation of classification performance, with different origins and niches of application: accuracy, macro-accuracy, area under the ROC curve, the ROC convex hull, the absolute error, and the Brier score (with its decomposition into refinement and calibration). One way of understanding the relation among some of these metrics is the use of variable operating conditions (either in the form of misclassification costs or class proportions). Thus, a metric may correspond to some expected loss over a range of operating conditions. One dimension for the analysis has been precisely the distribution we take for this range of operating conditions, leading to some important connections in the area of proper scoring rules. However, we show that there is another dimension which has not received attention in the analysis of performance metrics. This new dimension is given by the decision rule, which is typically implemented as a threshold choice method when using scoring models. In this paper, we explore many old and new threshold choice methods: fixed, score-uniform, score-driven, rate-driven and optimal, among others. By calculating the loss of these methods for a uniform range of operating conditions we get the 0-1 loss, the absolute error, the Brier score (mean squared error), the AUC and the refinement loss respectively. This provides a comprehensive view of performance metrics as well as a systematic approach to loss minimisation, namely: take a model, apply several threshold choice methods consistent with the information which is (and will be) available about the operating condition, and compare their expected losses. In order to assist in this procedure we also derive several connections between the aforementioned performance metrics, and we highlight the role of calibration in choosing the threshold choice method.
[ { "version": "v1", "created": "Mon, 12 Dec 2011 18:03:42 GMT" }, { "version": "v2", "created": "Sat, 28 Jan 2012 09:44:33 GMT" } ]
1,327,968,000,000
[ [ "Hernández-Orallo", "José", "" ], [ "Flach", "Peter", "" ], [ "Ferri", "Cèsar", "" ] ]
1112.2681
Muhammad Islam
Muhammad Asiful Islam, C. R. Ramakrishnan, I. V. Ramakrishnan
Inference in Probabilistic Logic Programs with Continuous Random Variables
12 pages. arXiv admin note: substantial text overlap with arXiv:1203.4287
Theory and Practice of Logic Programming / Volume12 / Special Issue4-5 / July 2012, pp 505-523
10.1017/S1471068412000154
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic Logic Programming (PLP), exemplified by Sato and Kameya's PRISM, Poole's ICL, Raedt et al's ProbLog and Vennekens et al's LPAD, is aimed at combining statistical and logical knowledge representation and inference. A key characteristic of PLP frameworks is that they are conservative extensions to non-probabilistic logic programs which have been widely used for knowledge representation. PLP frameworks extend traditional logic programming semantics to a distribution semantics, where the semantics of a probabilistic logic program is given in terms of a distribution over possible models of the program. However, the inference techniques used in these works rely on enumerating sets of explanations for a query answer. Consequently, these languages permit very limited use of random variables with continuous distributions. In this paper, we present a symbolic inference procedure that uses constraints and represents sets of explanations without enumeration. This permits us to reason over PLPs with Gaussian or Gamma-distributed random variables (in addition to discrete-valued random variables) and linear equality constraints over reals. We develop the inference procedure in the context of PRISM; however the procedure's core ideas can be easily applied to other PLP languages as well. An interesting aspect of our inference procedure is that PRISM's query evaluation process becomes a special case in the absence of any continuous random variables in the program. The symbolic inference procedure enables us to reason over complex probabilistic models such as Kalman filters and a large subclass of Hybrid Bayesian networks that were hitherto not possible in PLP frameworks. (To appear in Theory and Practice of Logic Programming).
[ { "version": "v1", "created": "Mon, 12 Dec 2011 20:16:55 GMT" }, { "version": "v2", "created": "Fri, 13 Jan 2012 04:28:09 GMT" }, { "version": "v3", "created": "Mon, 8 Oct 2012 03:24:10 GMT" } ]
1,349,740,800,000
[ [ "Islam", "Muhammad Asiful", "" ], [ "Ramakrishnan", "C. R.", "" ], [ "Ramakrishnan", "I. V.", "" ] ]
1112.5381
Daan Fierens
Daan Fierens
Improving the Efficiency of Approximate Inference for Probabilistic Logical Models by means of Program Specialization
17 pages
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the task of performing probabilistic inference with probabilistic logical models. Many algorithms for approximate inference with such models are based on sampling. From a logic programming perspective, sampling boils down to repeatedly calling the same queries on a knowledge base composed of a static part and a dynamic part. The larger the static part, the more redundancy there is in these repeated calls. This is problematic since inefficient sampling yields poor approximations. We show how to apply logic program specialization to make sampling-based inference more efficient. We develop an algorithm that specializes the definitions of the query predicates with respect to the static part of the knowledge base. In experiments on real-world data we obtain speedups of up to an order of magnitude, and these speedups grow with the data-size.
[ { "version": "v1", "created": "Thu, 22 Dec 2011 17:01:34 GMT" } ]
1,426,723,200,000
[ [ "Fierens", "Daan", "" ] ]
1201.0414
Xuechong Guan
Xuechong Guan and Yongming Li
Continuity in Information Algebras
null
null
10.1142/S0218488512500304
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/3.0/
In this paper, the continuity and strong continuity in domain-free information algebras and labeled information algebras are introduced, respectively. A more general concept of a continuous function defined between two domain-free continuous information algebras is presented. It is shown that, with the operations of combination and focusing, the set of all continuous functions between two domain-free s-continuous information algebras forms a new s-continuous information algebra. By studying the relationship between domain-free information algebras and labeled information algebras, it is demonstrated that they correspond to each other with respect to s-compactness.
[ { "version": "v1", "created": "Mon, 2 Jan 2012 02:40:12 GMT" } ]
1,349,654,400,000
[ [ "Guan", "Xuechong", "" ], [ "Li", "Yongming", "" ] ]
1201.0564
Toby Walsh
Ronald de Haan, Nina Narodytska, Toby Walsh
The RegularGcc Matrix Constraint
Submitted to CPAIOR 2012
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study propagation of the RegularGcc global constraint. This ensures that each row of a matrix of decision variables satisfies a Regular constraint, and each column satisfies a Gcc constraint. On the negative side, we prove that propagation is NP-hard even under some strong restrictions (e.g. just 3 values, just 4 states in the automaton, or just 5 columns in the matrix). On the positive side, we identify two cases where propagation is fixed parameter tractable. In addition, we show how to improve propagation over a simple decomposition into separate Regular and Gcc constraints by identifying some necessary but insufficient conditions for a solution. We enforce these conditions with some additional weighted row automata. Experimental results demonstrate the potential of these methods on some standard benchmark problems.
[ { "version": "v1", "created": "Tue, 3 Jan 2012 03:30:18 GMT" } ]
1,480,118,400,000
[ [ "de Haan", "Ronald", "" ], [ "Narodytska", "Nina", "" ], [ "Walsh", "Toby", "" ] ]