id
stringlengths 9
10
| submitter
stringlengths 5
47
⌀ | authors
stringlengths 5
1.72k
| title
stringlengths 11
234
| comments
stringlengths 1
491
⌀ | journal-ref
stringlengths 4
396
⌀ | doi
stringlengths 13
97
⌀ | report-no
stringlengths 4
138
⌀ | categories
stringclasses 1
value | license
stringclasses 9
values | abstract
stringlengths 29
3.66k
| versions
listlengths 1
21
| update_date
int64 1,180B
1,718B
| authors_parsed
listlengths 1
98
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1311.7139 | Florentin Smarandache | Florentin Smarandache | Introduction to Neutrosophic Measure, Neutrosophic Integral, and
Neutrosophic Probability | 140 pages. 10 figures | Published as a book by Sitech in 2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce for the first time the notions of neutrosophic
measure and neutrosophic integral, and we develop the 1995 notion of
neutrosophic probability. We present many practical examples. It is possible to
define the neutrosophic measure and consequently the neutrosophic integral and
neutrosophic probability in many ways, because there are various types of
indeterminacies, depending on the problem we need to solve. Neutrosophics study
the indeterminacy. Indeterminacy is different from randomness. It can be caused
by physical space materials and type of construction, by items involved in the
space, etc.
| [
{
"version": "v1",
"created": "Wed, 27 Nov 2013 18:56:03 GMT"
}
]
| 1,385,942,400,000 | [
[
"Smarandache",
"Florentin",
""
]
]
|
1312.0144 | Yanjing Wang | Jie Fan, Yanjing Wang, Hans van Ditmarsch | Knowing Whether | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowing whether a proposition is true means knowing that it is true or
knowing that it is false. In this paper, we study logics with a modal operator
Kw for knowing whether but without a modal operator K for knowing that. This
logic is not a normal modal logic, because we do not have Kw (phi -> psi) ->
(Kw phi -> Kw psi). Knowing whether logic cannot define many common frame
properties, and its expressive power less than that of basic modal logic over
classes of models without reflexivity. These features make axiomatizing knowing
whether logics non-trivial. We axiomatize knowing whether logic over various
frame classes. We also present an extension of knowing whether logic with
public announcement operators and we give corresponding reduction axioms for
that. We compare our work in detail to two recent similar proposals.
| [
{
"version": "v1",
"created": "Sat, 30 Nov 2013 19:18:49 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Dec 2013 13:19:33 GMT"
},
{
"version": "v3",
"created": "Thu, 12 Dec 2013 10:02:26 GMT"
}
]
| 1,386,892,800,000 | [
[
"Fan",
"Jie",
""
],
[
"Wang",
"Yanjing",
""
],
[
"van Ditmarsch",
"Hans",
""
]
]
|
1312.0735 | Jean-Baptiste Lamy | Jean-Baptiste Lamy (LIM\&BIO), Anis Ellini (LIM\&BIO), Vahid
Ebrahiminia, Jean-Daniel Zucker (UMMISCO), Hector Falcoff (SFTG), Alain Venot
(LIM\&BIO) | Use of the C4.5 machine learning algorithm to test a clinical
guideline-based decision support system | null | Studies in Health Technology and Informatics 136 (2008) 223-8 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Well-designed medical decision support system (DSS) have been shown to
improve health care quality. However, before they can be used in real clinical
situations, these systems must be extensively tested, to ensure that they
conform to the clinical guidelines (CG) on which they are based. Existing
methods cannot be used for the systematic testing of all possible test cases.
We describe here a new exhaustive dynamic verification method. In this method,
the DSS is considered to be a black box, and the Quinlan C4.5 algorithm is used
to build a decision tree from an exhaustive set of DSS input vectors and
outputs. This method was successfully used for the testing of a medical DSS
relating to chronic diseases: the ASTI critiquing module for type 2 diabetes.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2013 08:59:02 GMT"
}
]
| 1,386,115,200,000 | [
[
"Lamy",
"Jean-Baptiste",
"",
"LIM\\&BIO"
],
[
"Ellini",
"Anis",
"",
"LIM\\&BIO"
],
[
"Ebrahiminia",
"Vahid",
"",
"UMMISCO"
],
[
"Zucker",
"Jean-Daniel",
"",
"UMMISCO"
],
[
"Falcoff",
"Hector",
"",
"SFTG"
],
[
"Venot",
"Alain",
"",
"LIM\\&BIO"
]
]
|
1312.0736 | Jean-Baptiste Lamy | Jean-Baptiste Lamy (LIM\&BIO), Vahid Ebrahiminia, Brigitte Seroussi
(LIM\&BIO), Jacques Bouaud, Christian Simon (LIM\&BIO), Madeleine Favre
(SFTG), Hector Falcoff (SFTG), Alain Venot (LIM\&BIO) | A generic system for critiquing physicians' prescriptions: usability,
satisfaction and lessons learnt | null | Studies in Health Technology and Informatics 169 (2011) 125-9 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clinical decision support systems have been developed to help physicians to
take clinical guidelines into account during consultations. The ASTI critiquing
module is one such systems; it provides the physician with automatic criticisms
when a drug prescription does not follow the guidelines. It was initially
developed for hypertension and type 2 diabetes, but is designed to be generic
enough for application to all chronic diseases. We present here the results of
usability and satisfaction evaluations for the ASTI critiquing module, obtained
with GPs for a newly implemented guideline concerning dyslipaemia, and we
discuss the lessons learnt and the difficulties encountered when building a
generic DSS for critiquing physicians' prescriptions.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2013 08:59:47 GMT"
}
]
| 1,386,115,200,000 | [
[
"Lamy",
"Jean-Baptiste",
"",
"LIM\\&BIO"
],
[
"Ebrahiminia",
"Vahid",
"",
"LIM\\&BIO"
],
[
"Seroussi",
"Brigitte",
"",
"LIM\\&BIO"
],
[
"Bouaud",
"Jacques",
"",
"LIM\\&BIO"
],
[
"Simon",
"Christian",
"",
"LIM\\&BIO"
],
[
"Favre",
"Madeleine",
"",
"SFTG"
],
[
"Falcoff",
"Hector",
"",
"SFTG"
],
[
"Venot",
"Alain",
"",
"LIM\\&BIO"
]
]
|
1312.0841 | Ben Ruijl | Ben Ruijl and Jos Vermaseren and Aske Plaat and Jaap van den Herik | Combining Simulated Annealing and Monte Carlo Tree Search for Expression
Simplification | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many applications of computer algebra large expressions must be simplified
to make repeated numerical evaluations tractable. Previous works presented
heuristically guided improvements, e.g., for Horner schemes. The remaining
expression is then further reduced by common subexpression elimination. A
recent approach successfully applied a relatively new algorithm, Monte Carlo
Tree Search (MCTS) with UCT as the selection criterion, to find better variable
orderings. Yet, this approach is fit for further improvements since it is
sensitive to the so-called exploration-exploitation constant $C_p$ and the
number of tree updates $N$. In this paper we propose a new selection criterion
called Simulated Annealing UCT (SA-UCT) that has a dynamic
exploration-exploitation parameter, which decreases with the iteration number
$i$ and thus reduces the importance of exploration over time. First, we provide
an intuitive explanation in terms of the exploration-exploitation behavior of
the algorithm. Then, we test our algorithm on three large expressions of
different origins. We observe that SA-UCT widens the interval of good initial
values $C_p$ where best results are achieved. The improvement is large (more
than a tenfold) and facilitates the selection of an appropriate $C_p$.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2013 14:56:28 GMT"
}
]
| 1,386,115,200,000 | [
[
"Ruijl",
"Ben",
""
],
[
"Vermaseren",
"Jos",
""
],
[
"Plaat",
"Aske",
""
],
[
"Herik",
"Jaap van den",
""
]
]
|
1312.1146 | Anna Roub\'i\v{c}kov\'a | Anna Roub\'i\v{c}kov\'a and Ivan Serina | Case-Based Merging Techniques in OAKPLAN | preliminary version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Case-based planning can take advantage of former problem-solving experiences
by storing in a plan library previously generated plans that can be reused to
solve similar planning problems in the future. Although comparative worst-case
complexity analyses of plan generation and reuse techniques reveal that it is
not possible to achieve provable efficiency gain of reuse over generation, we
show that the case-based planning approach can be an effective alternative to
plan generation when similar reuse candidates can be chosen.
| [
{
"version": "v1",
"created": "Wed, 4 Dec 2013 13:10:45 GMT"
}
]
| 1,386,201,600,000 | [
[
"Roubíčková",
"Anna",
""
],
[
"Serina",
"Ivan",
""
]
]
|
1312.1887 | Julio Lemos | Julio Lemos | Constraints on the search space of argumentation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drawing from research on computational models of argumentation (particularly
the Carneades Argumentation System), we explore the graphical representation of
arguments in a dispute; then, comparing two different traditions on the limits
of the justification of decisions, and devising an intermediate, semi-formal,
model, we also show that it can shed light on the theory of dispute resolution.
We conclude our paper with an observation on the usefulness of highly
constrained reasoning for Online Dispute Resolution systems. Restricting the
search space of arguments exclusively to reasons proposed by the parties
(vetoing the introduction of new arguments by the human or artificial
arbitrator) is the only way to introduce some kind of decidability -- together
with foreseeability -- in the argumentation system.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2013 15:21:10 GMT"
}
]
| 1,386,547,200,000 | [
[
"Lemos",
"Julio",
""
]
]
|
1312.1971 | Sarwat Nizamani | Sarwat Nizamani, Nasrullah Memon, Uffe Kock Wiil, Panagiotis
Karampelas | Modeling Suspicious Email Detection using Enhanced Feature Selection | null | IJMO 2012 Vol.2(4): 371-377 ISSN: 2010-3697 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The paper presents a suspicious email detection model which incorporates
enhanced feature selection. In the paper we proposed the use of feature
selection strategies along with classification technique for terrorists email
detection. The presented model focuses on the evaluation of machine learning
algorithms such as decision tree (ID3), logistic regression, Na\"ive Bayes
(NB), and Support Vector Machine (SVM) for detecting emails containing
suspicious content. In the literature, various algorithms achieved good
accuracy for the desired task. However, the results achieved by those
algorithms can be further improved by using appropriate feature selection
mechanisms. We have identified the use of a specific feature selection scheme
that improves the performance of the existing algorithms.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2013 19:25:33 GMT"
}
]
| 1,386,547,200,000 | [
[
"Nizamani",
"Sarwat",
""
],
[
"Memon",
"Nasrullah",
""
],
[
"Wiil",
"Uffe Kock",
""
],
[
"Karampelas",
"Panagiotis",
""
]
]
|
1312.2242 | Nikolaos Mavridis | N. Mavridis, S. Konstantopoulos, I. Vetsikas, I. Heldal, P.
Karampiperis, G. Mathiason, S. Thill, K. Stathis, V. Karkaletsis | CLIC: A Framework for Distributed, On-Demand, Human-Machine Cognitive
Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional Artificial Cognitive Systems (for example, intelligent robots)
share a number of limitations. First, they are usually made up only of machine
components; humans are only playing the role of user or supervisor. And yet,
there are tasks in which the current state of the art of AI has much worse
performance or is more expensive than humans: thus, it would be highly
beneficial to have a systematic way of creating systems with both human and
machine components, possibly with remote non-expert humans providing
short-duration real-time services. Second, their components are often dedicated
to only one system, and underutilized for a big part of their lifetime. Third,
there is no inherent support for robust operation, and if a new better
component becomes available, one cannot easily replace the old component.
Fourth, they are viewed as a resource to be developed and owned, not as a
utility. Thus, we are presenting CLIC: a framework for constructing cognitive
systems that overcome the above limitations. The architecture of CLIC provides
specific mechanisms for creating and operating cognitive systems that fulfill a
set of desiderata: First, that are distributed yet situated, interacting with
the physical world though sensing and actuation services, and that are also
combining human as well as machine services. Second, that are made up of
components that are time-shared and re-usable. Third, that provide increased
robustness through self-repair. Fourth, that are constructed and reconstructed
on the fly, with components that dynamically enter and exit the system during
operation, on the basis of availability, pricing, and need. Importantly, fifth,
the cognitive systems created and operated by CLIC do not need to be owned and
can be provided on demand, as a utility; thus transforming human-machine
situated intelligence to a service, and opening up many interesting
opportunities.
| [
{
"version": "v1",
"created": "Sun, 8 Dec 2013 18:53:58 GMT"
}
]
| 1,386,633,600,000 | [
[
"Mavridis",
"N.",
""
],
[
"Konstantopoulos",
"S.",
""
],
[
"Vetsikas",
"I.",
""
],
[
"Heldal",
"I.",
""
],
[
"Karampiperis",
"P.",
""
],
[
"Mathiason",
"G.",
""
],
[
"Thill",
"S.",
""
],
[
"Stathis",
"K.",
""
],
[
"Karkaletsis",
"V.",
""
]
]
|
1312.2506 | Daniela Inclezan | Daniela Inclezan | An Application of Answer Set Programming to the Field of Second Language
Acquisition | 17 pages, 3 tables, to appear in Theory and Practice of Logic
Programming (TPLP) | Theory and Practice of Logic Programming 15 (2015) 1-17 | 10.1017/S1471068413000653 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper explores the contributions of Answer Set Programming (ASP) to the
study of an established theory from the field of Second Language Acquisition:
Input Processing. The theory describes default strategies that learners of a
second language use in extracting meaning out of a text, based on their
knowledge of the second language and their background knowledge about the
world. We formalized this theory in ASP, and as a result we were able to
determine opportunities for refining its natural language description, as well
as directions for future theory development. We applied our model to automating
the prediction of how learners of English would interpret sentences containing
the passive voice. We present a system, PIas, that uses these predictions to
assist language instructors in designing teaching materials. To appear in
Theory and Practice of Logic Programming (TPLP).
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2013 16:37:22 GMT"
}
]
| 1,582,070,400,000 | [
[
"Inclezan",
"Daniela",
""
]
]
|
1312.2709 | Anugrah Kumar | Anugrah Kumar, Sanjiban Shekar Roy, Sarvesh SS Rawat, Sanklan Saxena | Phishing Detection by determining reliability factor using rough set
theory | The International Conference on Machine Intelligence Research and
Advancement, ICMIRA-2013 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | Phishing is a common online weapon, used against users, by Phishers for
acquiring a confidential information through deception. Since the inception of
internet, nearly everything, ranging from money transaction to sharing
information, is done online in most parts of the world. This has also given
rise to malicious activities such as Phishing. Detecting Phishing is an
intricate process due to complexity, ambiguity and copious amount of
possibilities of factors responsible for phishing . Rough sets can be a
powerful tool, when working on such kind of Applications containing vague or
imprecise data. This paper proposes an approach towards Phishing Detection
Using Rough Set Theory. The Thirteen basic factors, directly responsible
towards Phishing, are grouped into four Strata. Reliability Factor is
determined on the basis of the outcome of these strata, using Rough Set Theory
. Reliability Factor determines the possibility of a suspected site to be Valid
or Fake. Using Rough set Theory most and the least influential factors towards
Phishing are also determined.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 08:10:38 GMT"
}
]
| 1,386,720,000,000 | [
[
"Kumar",
"Anugrah",
""
],
[
"Roy",
"Sanjiban Shekar",
""
],
[
"Rawat",
"Sarvesh SS",
""
],
[
"Saxena",
"Sanklan",
""
]
]
|
1312.2798 | Fennie Liang | Shao Fen Liang, Donia Scott, Robert Stevens and Alan Rector | OntoVerbal: a Generic Tool and Practical Application to SNOMED CT | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology development is a non-trivial task requiring expertise in the chosen
ontological language. We propose a method for making the content of ontologies
more transparent by presenting, through the use of natural language generation,
naturalistic descriptions of ontology classes as textual paragraphs. The method
has been implemented in a proof-of- concept system, OntoVerbal, that
automatically generates paragraph-sized textual descriptions of ontological
classes expressed in OWL. OntoVerbal has been applied to ontologies that can be
loaded into Prot\'eg\'e and been evaluated with SNOMED CT, showing that it
provides coherent, well-structured and accurate textual descriptions of
ontology classes.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2013 13:55:30 GMT"
}
]
| 1,386,720,000,000 | [
[
"Liang",
"Shao Fen",
""
],
[
"Scott",
"Donia",
""
],
[
"Stevens",
"Robert",
""
],
[
"Rector",
"Alan",
""
]
]
|
1312.3825 | Claas Ahlrichs | Claas Ahlrichs and Michael Lawo | Parkinson's Disease Motor Symptoms in Machine Learning: A Review | Health Informatics: An International Journal (HIIJ), November 2013,
Volume 2, Number 4 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reviews related work and state-of-the-art publications for
recognizing motor symptoms of Parkinson's Disease (PD). It presents research
efforts that were undertaken to inform on how well traditional machine learning
algorithms can handle this task. In particular, four PD related motor symptoms
are highlighted (i.e. tremor, bradykinesia, freezing of gait and dyskinesia)
and their details summarized. Thus the primary objective of this research is to
provide a literary foundation for development and improvement of algorithms for
detecting PD related motor symptoms.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2013 14:58:39 GMT"
}
]
| 1,387,152,000,000 | [
[
"Ahlrichs",
"Claas",
""
],
[
"Lawo",
"Michael",
""
]
]
|
1312.3971 | Tommaso Urli | Tommaso Urli | Balancing bike sharing systems (BBSS): instance generation from the
CitiBike NYC data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bike sharing systems are a very popular means to provide bikes to citizens in
a simple and cheap way. The idea is to install bike stations at various points
in the city, from which a registered user can easily loan a bike by removing it
from a specialized rack. After the ride, the user may return the bike at any
station (if there is a free rack). Services of this kind are mainly public or
semi-public, often aimed at increasing the attractiveness of non-motorized
means of transportation, and are usually free, or almost free, of charge for
the users. Depending on their location, bike stations have specific patterns
regarding when they are empty or full. For instance, in cities where most jobs
are located near the city centre, the commuters cause certain peaks in the
morning: the central bike stations are filled, while the stations in the
outskirts are emptied. Furthermore, stations located on top of a hill are more
likely to be empty, since users are less keen on cycling uphill to return the
bike, and often leave their bike at a more reachable station. These issues
result in substantial user dissatisfaction which may eventually cause the users
to abandon the service. This is why nowadays most bike sharing system providers
take measures to rebalance them. Over the last few years, balancing bike
sharing systems (BBSS) has become increasingly studied in optimization. As
such, generating meaningful instance to serve as a benchmark for the proposed
approaches is an important task. In this technical report we describe the
procedure we used to generate BBSS problem instances from data of the CitiBike
NYC bike sharing system.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2013 22:10:54 GMT"
}
]
| 1,387,238,400,000 | [
[
"Urli",
"Tommaso",
""
]
]
|
1312.4231 | Aiping Huang | Aiping Huang and William Zhu | Dependence space of matroids and its application to attribute reduction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attribute reduction is a basic issue in knowledge representation and data
mining. Rough sets provide a theoretical foundation for the issue. Matroids
generalized from matrices have been widely used in many fields, particularly
greedy algorithm design, which plays an important role in attribute reduction.
Therefore, it is meaningful to combine matroids with rough sets to solve the
optimization problems. In this paper, we introduce an existing algebraic
structure called dependence space to study the reduction problem in terms of
matroids. First, a dependence space of matroids is constructed. Second, the
characterizations for the space such as consistent sets and reducts are studied
through matroids. Finally, we investigate matroids by the means of the space
and present two expressions for their bases. In a word, this paper provides new
approaches to study attribute reduction.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2013 02:27:15 GMT"
},
{
"version": "v2",
"created": "Thu, 5 Mar 2015 02:45:33 GMT"
}
]
| 1,425,600,000,000 | [
[
"Huang",
"Aiping",
""
],
[
"Zhu",
"William",
""
]
]
|
1312.4232 | Aiping Huang | Aiping Huang and William Zhu | Geometric lattice structure of covering and its application to attribute
reduction through matroids | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reduction of covering decision systems is an important problem in data
mining, and covering-based rough sets serve as an efficient technique to
process the problem. Geometric lattices have been widely used in many fields,
especially greedy algorithm design which plays an important role in the
reduction problems. Therefore, it is meaningful to combine coverings with
geometric lattices to solve the optimization problems. In this paper, we obtain
geometric lattices from coverings through matroids and then apply them to the
issue of attribute reduction. First, a geometric lattice structure of a
covering is constructed through transversal matroids. Then its atoms are
studied and used to describe the lattice. Second, considering that all the
closed sets of a finite matroid form a geometric lattice, we propose a
dependence space through matroids and study the attribute reduction issues of
the space, which realizes the application of geometric lattices to attribute
reduction. Furthermore, a special type of information system is taken as an
example to illustrate the application. In a word, this work points out an
interesting view, namely, geometric lattice to study the attribute reduction
issues of information systems.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2013 02:30:07 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Jan 2014 11:55:51 GMT"
}
]
| 1,389,052,800,000 | [
[
"Huang",
"Aiping",
""
],
[
"Zhu",
"William",
""
]
]
|
1312.4234 | Aiping Huang | Aiping Huang and William Zhu | Connectedness of graphs and its application to connected matroids
through covering-based rough sets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph theoretical ideas are highly utilized by computer science fields
especially data mining. In this field, a data structure can be designed in the
form of tree. Covering is a widely used form of data representation in data
mining and covering-based rough sets provide a systematic approach to this type
of representation. In this paper, we study the connectedness of graphs through
covering-based rough sets and apply it to connected matroids. First, we present
an approach to inducing a covering by a graph, and then study the connectedness
of the graph from the viewpoint of the covering approximation operators.
Second, we construct a graph from a matroid, and find the matroid and the graph
have the same connectedness, which makes us to use covering-based rough sets to
study connected matroids. In summary, this paper provides a new approach to
studying graph theory and matroid theory.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2013 02:32:43 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Jan 2014 12:00:52 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Mar 2015 08:23:03 GMT"
}
]
| 1,425,513,600,000 | [
[
"Huang",
"Aiping",
""
],
[
"Zhu",
"William",
""
]
]
|
1312.4839 | Federico Cerutti | Chatschik Bisdikian, Federico Cerutti, Yuqing Tang, Nir Oren | Reasoning about the Impacts of Information Sharing | Submitted to Information Systems Frontiers Journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we describe a decision process framework allowing an agent to
decide what information it should reveal to its neighbours within a
communication graph in order to maximise its utility. We assume that these
neighbours can pass information onto others within the graph. The inferences
made by agents receiving the messages can have a positive or negative impact on
the information providing agent, and our decision process seeks to identify how
a message should be modified in order to be most beneficial to the information
producer. Our decision process is based on the provider's subjective beliefs
about others in the system, and therefore makes extensive use of the notion of
trust. Our core contributions are therefore the construction of a model of
information propagation; the description of the agent's decision procedure; and
an analysis of some of its properties.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2013 09:30:23 GMT"
}
]
| 1,387,324,800,000 | [
[
"Bisdikian",
"Chatschik",
""
],
[
"Cerutti",
"Federico",
""
],
[
"Tang",
"Yuqing",
""
],
[
"Oren",
"Nir",
""
]
]
|
1312.5097 | Alexander Darer | Alexander Darer and Peter Lewis | A Cellular Automaton Based Controller for a Ms. Pac-Man Agent | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video games can be used as an excellent test bed for Artificial Intelligence
(AI) techniques. They are challenging and non-deterministic, this makes it very
difficult to write strong AI players. An example of such a video game is Ms.
Pac-Man. In this paper we will outline some of the previous techniques used to
build AI controllers for Ms. Pac-Man as well as presenting a new and novel
solution. My technique utilises a Cellular Automaton (CA) to build a
representation of the environment at each time step of the game. The basis of
the representation is a 2-D grid of cells. Each cell has a state and a set of
rules which determine whether or not that cell will dominate (i.e. pass its
state value onto) adjacent cells at each update. Once a certain number of
update iterations have been completed, the CA represents the state of the
environment in the game, the goals and the dangers. At this point, Ms. Pac-Man
will decide her next move based only on her adjacent cells, that is to say, she
has no knowledge of the state of the environment as a whole, she will simply
follow the strongest path. This technique shows promise and allows the
controller to achieve high scores in a live game, the behaviour it exhibits is
interesting and complex. Moreover, this behaviour is produced by using very
simple rules which are applied many times to each cell in the grid. Simple
local interactions with complex global results are truly achieved.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2013 11:08:05 GMT"
}
]
| 1,387,411,200,000 | [
[
"Darer",
"Alexander",
""
],
[
"Lewis",
"Peter",
""
]
]
|
1312.5162 | Leon Abdillah | Ardina Ariani, Leon Andretti Abdillah, Firamon Syakti | Sistem pendukung keputusan kelayakan TKI ke luar negeri menggunakan
FMADM | Jurnal Sistem Informasi (SISFO) | Jurnal Sistem Informasi (SISFO), vol. 4, pp. 336-343, September
2013 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BP3TKI Palembang is the government agencies that coordinate, execute and
selection of prospective migrants registration and placement. To simplify the
existing procedures and improve decision-making is necessary to build a
decision support system (DSS) to determine eligibility for employment abroad by
applying Fuzzy Multiple Attribute Decision Making (FMADM), using the linear
sequential systems development methods. The system is built using Microsoft
Visual Basic. Net 2010 and SQL Server 2008 database. The design of the system
using use case diagrams and class diagrams to identify the needs of users and
systems as well as systems implementation guidelines. This Decision Support
System able to rank and produce the prospective migrants, making it easier for
parties to take decision BP3TKI the workers who will be working out of the
country.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2013 16:59:16 GMT"
}
]
| 1,387,411,200,000 | [
[
"Ariani",
"Ardina",
""
],
[
"Abdillah",
"Leon Andretti",
""
],
[
"Syakti",
"Firamon",
""
]
]
|
1312.5378 | Guy Van den Broeck | Guy Van den Broeck, Wannes Meert, Adnan Darwiche | Skolemization for Weighted First-Order Model Counting | To appear in Proceedings of the 14th International Conference on
Principles of Knowledge Representation and Reasoning (KR), Vienna, Austria,
July 2014 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | First-order model counting emerged recently as a novel reasoning task, at the
core of efficient algorithms for probabilistic logics. We present a
Skolemization algorithm for model counting problems that eliminates existential
quantifiers from a first-order logic theory without changing its weighted model
count. For certain subsets of first-order logic, lifted model counters were
shown to run in time polynomial in the number of objects in the domain of
discourse, where propositional model counters require exponential time.
However, these guarantees apply only to Skolem normal form theories (i.e., no
existential quantifiers) as the presence of existential quantifiers reduces
lifted model counters to propositional ones. Since textbook Skolemization is
not sound for model counting, these restrictions precluded efficient model
counting for directed models, such as probabilistic logic programs, which rely
on existential quantification. Our Skolemization procedure extends the
applicability of first-order model counters to these representations. Moreover,
it simplifies the design of lifted model counting algorithms.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 00:40:56 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Mar 2014 13:50:15 GMT"
}
]
| 1,394,064,000,000 | [
[
"Broeck",
"Guy Van den",
""
],
[
"Meert",
"Wannes",
""
],
[
"Darwiche",
"Adnan",
""
]
]
|
1312.5515 | Marek Kurdej | Marek Kurdej (HEUDIASYC), V\'eronique Cherfaoui (HEUDIASYC) | Conservative, Proportional and Optimistic Contextual Discounting in the
Belief Functions Theory | 7 pages | 16th International Conference on Information Fusion, Istanbul :
Turkey (2013) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information discounting plays an important role in the theory of belief
functions and, generally, in information fusion. Nevertheless, neither
classical uniform discounting nor contextual cannot model certain use cases,
notably temporal discounting. In this article, new contextual discounting
schemes, conservative, proportional and optimistic, are proposed. Some
properties of these discounting operations are examined. Classical discounting
is shown to be a special case of these schemes. Two motivating cases are
discussed: modelling of source reliability and application to temporal
discounting.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 12:40:25 GMT"
}
]
| 1,387,497,600,000 | [
[
"Kurdej",
"Marek",
"",
"HEUDIASYC"
],
[
"Cherfaoui",
"Véronique",
"",
"HEUDIASYC"
]
]
|
1312.5713 | Dimiter Dobrev | Dimiter Dobrev | Giving the AI definition a form suitable for the engineer | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence - what is this? That is the question! In earlier
papers we already gave a formal definition for AI, but if one desires to build
an actual AI implementation, the following issues require attention and are
treated here: the data format to be used, the idea of Undef and Nothing
symbols, various ways for defining the "meaning of life", and finally, a new
notion of "incorrect move". These questions are of minor importance in the
theoretical discussion, but we already know the answer of the question "Does AI
exist?" Now we want to make the next step and to create this program.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 19:28:18 GMT"
},
{
"version": "v2",
"created": "Tue, 31 Mar 2015 10:26:48 GMT"
}
]
| 1,427,846,400,000 | [
[
"Dobrev",
"Dimiter",
""
]
]
|
1312.5714 | Patrick Connor | Patrick C. Connor and Thomas P. Trappenberg | Avoiding Confusion between Predictors and Inhibitors in Value Function
Approximation | 14 pages, 3 figures, 23 references, Workshop paper in ICLR 2014
(updated based on reviewer comments) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In reinforcement learning, the goal is to seek rewards and avoid punishments.
A simple scalar captures the value of a state or of taking an action, where
expected future rewards increase and punishments decrease this quantity.
Naturally an agent should learn to predict this quantity to take beneficial
actions, and many value function approximators exist for this purpose. In the
present work, however, we show how value function approximators can cause
confusion between predictors of an outcome of one valence (e.g., a signal of
reward) and the inhibitor of the opposite valence (e.g., a signal canceling
expectation of punishment). We show this to be a problem for both linear and
non-linear value function approximators, especially when the amount of data (or
experience) is limited. We propose and evaluate a simple resolution: to instead
predict reward and punishment values separately, and rectify and add them to
get the value needed for decision making. We evaluate several function
approximators in this slightly different value function approximation
architecture and show that this approach is able to circumvent the confusion
and thereby achieve lower value-prediction errors.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2013 19:52:52 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Feb 2015 15:35:56 GMT"
}
]
| 1,424,304,000,000 | [
[
"Connor",
"Patrick C.",
""
],
[
"Trappenberg",
"Thomas P.",
""
]
]
|
1312.6096 | Michael Fink | Mario Alviano and Wolfgang Faber | Properties of Answer Set Programming with Convex Generalized Atoms | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, Answer Set Programming (ASP), logic programming under the
stable model or answer set semantics, has seen several extensions by
generalizing the notion of an atom in these programs: be it aggregate atoms,
HEX atoms, generalized quantifiers, or abstract constraints, the idea is to
have more complicated satisfaction patterns in the lattice of Herbrand
interpretations than traditional, simple atoms. In this paper we refer to any
of these constructs as generalized atoms. Several semantics with differing
characteristics have been proposed for these extensions, rendering the big
picture somewhat blurry. In this paper, we analyze the class of programs that
have convex generalized atoms (originally proposed by Liu and Truszczynski in
[10]) in rule bodies and show that for this class many of the proposed
semantics coincide. This is an interesting result, since recently it has been
shown that this class is the precise complexity boundary for the FLP semantics.
We investigate whether similar results also hold for other semantics, and
discuss the implications of our findings.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 20:18:04 GMT"
}
]
| 1,387,756,800,000 | [
[
"Alviano",
"Mario",
""
],
[
"Faber",
"Wolfgang",
""
]
]
|
1312.6105 | Michael Fink | Marcello Balduccini and Yulia Lierler | Hybrid Automated Reasoning Tools: from Black-box to Clear-box
Integration | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, researchers in answer set programming and constraint programming
spent significant efforts in the development of hybrid languages and solving
algorithms combining the strengths of these traditionally separate fields.
These efforts resulted in a new research area: constraint answer set
programming (CASP). CASP languages and systems proved to be largely successful
at providing efficient solutions to problems involving hybrid reasoning tasks,
such as scheduling problems with elements of planning. Yet, the development of
CASP systems is difficult, requiring non-trivial expertise in multiple areas.
This suggests a need for a study identifying general development principles of
hybrid systems. Once these principles and their implications are well
understood, the development of hybrid languages and systems may become a
well-established and well-understood routine process. As a step in this
direction, in this paper we conduct a case study aimed at evaluating various
integration schemas of CASP methods.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 20:44:58 GMT"
}
]
| 1,387,756,800,000 | [
[
"Balduccini",
"Marcello",
""
],
[
"Lierler",
"Yulia",
""
]
]
|
1312.6113 | Michael Fink | Mutsunori Banbara, Martin Gebser, Katsumi Inoue, Torsten Schaub,
Takehide Soh, Naoyuki Tamura and Matthias Weise | Aspartame: Solving Constraint Satisfaction Problems with Answer Set
Programming | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Encoding finite linear CSPs as Boolean formulas and solving them by using
modern SAT solvers has proven to be highly effective, as exemplified by the
award-winning sugar system. We here develop an alternative approach based on
ASP. This allows us to use first-order encodings providing us with a high
degree of flexibility for easy experimentation with different implementations.
The resulting system aspartame re-uses parts of sugar for parsing and
normalizing CSPs. The obtained set of facts is then combined with an ASP
encoding that can be grounded and solved by off-the-shelf ASP systems. We
establish the competitiveness of our approach by empirically contrasting
aspartame and sugar.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 20:57:28 GMT"
}
]
| 1,387,756,800,000 | [
[
"Banbara",
"Mutsunori",
""
],
[
"Gebser",
"Martin",
""
],
[
"Inoue",
"Katsumi",
""
],
[
"Schaub",
"Torsten",
""
],
[
"Soh",
"Takehide",
""
],
[
"Tamura",
"Naoyuki",
""
],
[
"Weise",
"Matthias",
""
]
]
|
1312.6130 | Michael Fink | Michael Bartholomew and Joohyung Lee | A Functional View of Strong Negation in Answer Set Programming | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The distinction between strong negation and default negation has been useful
in answer set programming. We present an alternative account of strong
negation, which lets us view strong negation in terms of the functional stable
model semantics by Bartholomew and Lee. More specifically, we show that, under
complete interpretations, minimizing both positive and negative literals in the
traditional answer set semantics is essentially the same as ensuring the
uniqueness of Boolean function values under the functional stable model
semantics. The same account lets us view Lifschitz's two-valued logic programs
as a special case of the functional stable model semantics. In addition, we
show how non-Boolean intensional functions can be eliminated in favor of
Boolean intensional functions, and furthermore can be represented using strong
negation, which provides a way to compute the functional stable model semantics
using existing ASP solvers. We also note that similar results hold with the
functional stable model semantics by Cabalar.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:01:32 GMT"
}
]
| 1,387,843,200,000 | [
[
"Bartholomew",
"Michael",
""
],
[
"Lee",
"Joohyung",
""
]
]
|
1312.6134 | Michael Fink | Pedro Cabalar and Jorge Fandinno | An Algebra of Causal Chains | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we propose a multi-valued extension of logic programs under the
stable models semantics where each true atom in a model is associated with a
set of justifications, in a similar spirit than a set of proof trees. The main
contribution of this paper is that we capture justifications into an algebra of
truth values with three internal operations: an addition '+' representing
alternative justifications for a formula, a commutative product '*'
representing joint interaction of causes and a non-commutative product '.'
acting as a concatenation or proof constructor. Using this multi-valued
semantics, we obtain a one-to-one correspondence between the syntactic proof
tree of a standard (non-causal) logic program and the interpretation of each
true atom in a model. Furthermore, thanks to this algebraic characterization we
can detect semantic properties like redundancy and relevance of the obtained
justifications. We also identify a lattice-based characterization of this
algebra, defining a direct consequences operator, proving its continuity and
that its least fix point can be computed after a finite number of iterations.
Finally, we define the concept of causal stable model by introducing an
analogous transformation to Gelfond and Lifschitz's program reduct.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:07:19 GMT"
}
]
| 1,387,843,200,000 | [
[
"Cabalar",
"Pedro",
""
],
[
"Fandinno",
"Jorge",
""
]
]
|
1312.6138 | Michael Fink | Vinay K. Chaudhri, Stijn Heymans, Michael Wessel and Tran Cao Son | Query Answering in Object Oriented Knowledge Bases in Logic Programming:
Description and Challenge for ASP | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research on developing efficient and scalable ASP solvers can substantially
benefit by the availability of data sets to experiment with. KB_Bio_101
contains knowledge from a biology textbook, has been developed as part of
Project Halo, and has recently become available for research use. KB_Bio_101 is
one of the largest KBs available in ASP and the reasoning with it is
undecidable in general. We give a description of this KB and ASP programs for a
suite of queries that have been of practical interest. We explain why these
queries pose significant practical challenges for the current ASP solvers.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:13:10 GMT"
}
]
| 1,387,843,200,000 | [
[
"Chaudhri",
"Vinay K.",
""
],
[
"Heymans",
"Stijn",
""
],
[
"Wessel",
"Michael",
""
],
[
"Son",
"Tran Cao",
""
]
]
|
1312.6140 | Michael Fink | Stefan Ellmauthaler and Hannes Strass | The DIAMOND System for Argumentation: Preliminary Report | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Abstract dialectical frameworks (ADFs) are a powerful generalisation of
Dung's abstract argumentation frameworks. In this paper we present an answer
set programming based software system, called DIAMOND (DIAlectical MOdels
eNcoDing). It translates ADFs into answer set programs whose stable models
correspond to models of the ADF with respect to several semantics (i.e.
admissible, complete, stable, grounded).
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:17:03 GMT"
}
]
| 1,387,843,200,000 | [
[
"Ellmauthaler",
"Stefan",
""
],
[
"Strass",
"Hannes",
""
]
]
|
1312.6143 | Michael Fink | Martin Gebser, Philipp Obermeier and Torsten Schaub | A System for Interactive Query Answering with Answer Set Programming | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reactive answer set programming has paved the way for incorporating online
information into operative solving processes. Although this technology was
originally devised for dealing with data streams in dynamic environments, like
assisted living and cognitive robotics, it can likewise be used to incorporate
facts, rules, or queries provided by a user. As a result, we present the design
and implementation of a system for interactive query answering with reactive
answer set programming. Our system quontroller is based on the reactive solver
oclingo and implemented as a dedicated front-end. We describe its functionality
and implementation, and we illustrate its features by some selected use cases.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:24:30 GMT"
}
]
| 1,387,843,200,000 | [
[
"Gebser",
"Martin",
""
],
[
"Obermeier",
"Philipp",
""
],
[
"Schaub",
"Torsten",
""
]
]
|
1312.6146 | Michael Fink | Canan G\"uni\c{c}en, Esra Erdem and H\"usn\"u Yenig\"un | Generating Shortest Synchronizing Sequences using Answer Set Programming | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a finite state automaton, a synchronizing sequence is an input sequence
that takes all the states to the same state. Checking the existence of a
synchronizing sequence and finding a synchronizing sequence, if one exists, can
be performed in polynomial time. However, the problem of finding a shortest
synchronizing sequence is known to be NP-hard. In this work, the usefulness of
Answer Set Programming to solve this optimization problem is investigated, in
comparison with brute-force algorithms and SAT-based approaches.
Keywords: finite automata, shortest synchronizing sequence, ASP
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:30:10 GMT"
}
]
| 1,387,843,200,000 | [
[
"Güniçen",
"Canan",
""
],
[
"Erdem",
"Esra",
""
],
[
"Yenigün",
"Hüsnü",
""
]
]
|
1312.6151 | Michael Fink | Yuliya Lierler and Miroslaw Truszczynski | Abstract Modular Systems and Solvers | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrating diverse formalisms into modular knowledge representation systems
offers increased expressivity, modeling convenience and computational benefits.
We introduce concepts of abstract modules and abstract modular systems to study
general principles behind the design and analysis of model-finding programs, or
solvers, for integrated heterogeneous multi-logic systems. We show how abstract
modules and abstract modular systems give rise to transition systems, which are
a natural and convenient representation of solvers pioneered by the SAT
community. We illustrate our approach by showing how it applies to answer set
programming and propositional logic, and to multi-logic systems based on these
two formalisms.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:37:56 GMT"
}
]
| 1,387,843,200,000 | [
[
"Lierler",
"Yuliya",
""
],
[
"Truszczynski",
"Miroslaw",
""
]
]
|
1312.6156 | Michael Fink | Joost Vennekens | Negation in the Head of CP-logic Rules | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | CP-logic is a probabilistic extension of the logic FO(ID). Unlike ASP, both
of these logics adhere to a Tarskian informal semantics, in which
interpretations represent objective states-of-affairs. In other words, these
logics lack the epistemic component of ASP, in which interpretations represent
the beliefs or knowledge of a rational agent. Consequently, neither CP-logic
nor FO(ID) have the need for two kinds of negations: there is only one
negation, and its meaning is that of objective falsehood. Nevertheless, the
formal semantics of this objective negation is mathematically more similar to
ASP's negation-as-failure than to its classical negation. The reason is that
both CP-logic and FO(ID) have a constructive semantics in which all atoms start
out as false, and may only become true as the result of a rule application.
This paper investigates the possibility of adding the well-known ASP feature of
allowing negation in the head of rules to CP-logic. Because CP-logic only has
one kind of negation, it is of necessity this ''negation-as-failure like''
negation that will be allowed in the head. We investigate the intuitive meaning
of such a construct and the benefits that arise from it.
| [
{
"version": "v1",
"created": "Fri, 20 Dec 2013 21:41:20 GMT"
}
]
| 1,387,843,200,000 | [
[
"Vennekens",
"Joost",
""
]
]
|
1312.6558 | Indre Zliobaite | Indre Zliobaite, Mykola Pechenizkiy | Predictive User Modeling with Actionable Attributes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Different machine learning techniques have been proposed and used for
modeling individual and group user needs, interests and preferences. In the
traditional predictive modeling instances are described by observable
variables, called attributes. The goal is to learn a model for predicting the
target variable for unseen instances. For example, for marketing purposes a
company consider profiling a new user based on her observed web browsing
behavior, referral keywords or other relevant information. In many real world
applications the values of some attributes are not only observable, but can be
actively decided by a decision maker. Furthermore, in some of such applications
the decision maker is interested not only to generate accurate predictions, but
to maximize the probability of the desired outcome. For example, a direct
marketing manager can choose which type of a special offer to send to a client
(actionable attribute), hoping that the right choice will result in a positive
response with a higher probability. We study how to learn to choose the value
of an actionable attribute in order to maximize the probability of a desired
outcome in predictive modeling. We emphasize that not all instances are equally
sensitive to changes in actions. Accurate choice of an action is critical for
those instances, which are on the borderline (e.g. users who do not have a
strong opinion one way or the other). We formulate three supervised learning
approaches for learning to select the value of an actionable attribute at an
instance level. We also introduce a focused training procedure which puts more
emphasis on the situations where varying the action is the most likely to take
the effect. The proof of concept experimental validation on two real-world case
studies in web analytics and e-learning domains highlights the potential of the
proposed approaches.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2013 14:37:44 GMT"
}
]
| 1,387,843,200,000 | [
[
"Zliobaite",
"Indre",
""
],
[
"Pechenizkiy",
"Mykola",
""
]
]
|
1312.6726 | Jordi Grau-Moya | Jordi Grau-Moya and Daniel A. Braun | Bounded Rational Decision-Making in Changing Environments | 9 pages, 2 figures, NIPS 2013 Workshop on Planning with Information
Constraints | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A perfectly rational decision-maker chooses the best action with the highest
utility gain from a set of possible actions. The optimality principles that
describe such decision processes do not take into account the computational
costs of finding the optimal action. Bounded rational decision-making addresses
this problem by specifically trading off information-processing costs and
expected utility. Interestingly, a similar trade-off between energy and entropy
arises when describing changes in thermodynamic systems. This similarity has
been recently used to describe bounded rational agents. Crucially, this
framework assumes that the environment does not change while the decision-maker
is computing the optimal policy. When this requirement is not fulfilled, the
decision-maker will suffer inefficiencies in utility, that arise because the
current policy is optimal for an environment in the past. Here we borrow
concepts from non-equilibrium thermodynamics to quantify these inefficiencies
and illustrate with simulations its relationship with computational resources.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2013 00:22:44 GMT"
}
]
| 1,387,929,600,000 | [
[
"Grau-Moya",
"Jordi",
""
],
[
"Braun",
"Daniel A.",
""
]
]
|
1312.6764 | Eric Nivel | E. Nivel, K. R. Th\'orisson, B. R. Steunebrink, H. Dindo, G. Pezzulo,
M. Rodriguez, C. Hernandez, D. Ognibene, J. Schmidhuber, R. Sanz, H. P.
Helgason, A. Chella and G. K. Jonsson | Bounded Recursive Self-Improvement | null | null | null | RUTR-SCS13006 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have designed a machine that becomes increasingly better at behaving in
underspecified circumstances, in a goal-directed way, on the job, by modeling
itself and its environment as experience accumulates. Based on principles of
autocatalysis, endogeny, and reflectivity, the work provides an architectural
blueprint for constructing systems with high levels of operational autonomy in
underspecified circumstances, starting from a small seed. Through value-driven
dynamic priority scheduling controlling the parallel execution of a vast number
of reasoning threads, the system achieves recursive self-improvement after it
leaves the lab, within the boundaries imposed by its designers. A prototype
system has been implemented and demonstrated to learn a complex real-world
task, real-time multimodal dialogue with humans, by on-line observation. Our
work presents solutions to several challenges that must be solved for achieving
artificial general intelligence.
| [
{
"version": "v1",
"created": "Tue, 24 Dec 2013 06:17:55 GMT"
}
]
| 1,387,929,600,000 | [
[
"Nivel",
"E.",
""
],
[
"Thórisson",
"K. R.",
""
],
[
"Steunebrink",
"B. R.",
""
],
[
"Dindo",
"H.",
""
],
[
"Pezzulo",
"G.",
""
],
[
"Rodriguez",
"M.",
""
],
[
"Hernandez",
"C.",
""
],
[
"Ognibene",
"D.",
""
],
[
"Schmidhuber",
"J.",
""
],
[
"Sanz",
"R.",
""
],
[
"Helgason",
"H. P.",
""
],
[
"Chella",
"A.",
""
],
[
"Jonsson",
"G. K.",
""
]
]
|
1312.6996 | Muhammad Rezaul Karim | Muhammad Rezaul Karim | A New Approach to Constraint Weight Learning for Variable Ordering in
CSPs | null | Proceedings of the IEEE Congress on Evolutionary Computation (CEC
2014), pp. 2716-2723, Beijing, China, July 6-11, 2014 | 10.1109/CEC.2014.6900262 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A Constraint Satisfaction Problem (CSP) is a framework used for modeling and
solving constrained problems. Tree-search algorithms like backtracking try to
construct a solution to a CSP by selecting the variables of the problem one
after another. The order in which these algorithm select the variables
potentially have significant impact on the search performance. Various
heuristics have been proposed for choosing good variable ordering. Many
powerful variable ordering heuristics weigh the constraints first and then
utilize the weights for selecting good order of the variables. Constraint
weighting are basically employed to identify global bottlenecks in a CSP.
In this paper, we propose a new approach for learning weights for the
constraints using competitive coevolutionary Genetic Algorithm (GA). Weights
learned by the coevolutionary GA later help to make better choices for the
first few variables in a search. In the competitive coevolutionary GA,
constraints and candidate solutions for a CSP evolve together through an
inverse fitness interaction process. We have conducted experiments on several
random, quasi-random and patterned instances to measure the efficiency of the
proposed approach. The results and analysis show that the proposed approach is
good at learning weights to distinguish the hard constraints for quasi-random
instances and forced satisfiable random instances generated with the Model RB.
For other type of instances, RNDI still seems to be the best approach as our
experiments show.
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2013 18:15:14 GMT"
}
]
| 1,412,553,600,000 | [
[
"Karim",
"Muhammad Rezaul",
""
]
]
|
1312.7326 | Hiqmet Kamberaj Dr. | Hiqmet Kamberaj | Replica Exchange using q-Gaussian Swarm Quantum Particle Intelligence
Method | 10 pages, 5 figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a newly developed Replica Exchange algorithm using q -Gaussian
Swarm Quantum Particle Optimization (REX@q-GSQPO) method for solving the
problem of finding the global optimum. The basis of the algorithm is to run
multiple copies of independent swarms at different values of q parameter. Based
on an energy criterion, chosen to satisfy the detailed balance, we are swapping
the particle coordinates of neighboring swarms at regular iteration intervals.
The swarm replicas with high q values are characterized by high diversity of
particles allowing escaping local minima faster, while the low q replicas,
characterized by low diversity of particles, are used to sample more
efficiently the local basins. We compare the new algorithm with the standard
Gaussian Swarm Quantum Particle Optimization (GSQPO) and q-Gaussian Swarm
Quantum Particle Optimization (q-GSQPO) algorithms, and we found that the new
algorithm is more robust in terms of the number of fitness function calls, and
more efficient in terms ability convergence to the global minimum. In
additional, we also provide a method of optimally allocating the swarm replicas
among different q values. Our algorithm is tested for three benchmark
functions, which are known to be multimodal problems, at different
dimensionalities. In addition, we considered a polyalanine peptide of 12
residues modeled using a G\=o coarse-graining potential energy function.
| [
{
"version": "v1",
"created": "Sun, 17 Nov 2013 12:49:15 GMT"
}
]
| 1,388,361,600,000 | [
[
"Kamberaj",
"Hiqmet",
""
]
]
|
1312.7422 | Michael Fink | Michael Fink and Yuliya Lierler | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | Proceedings of Answer Set Programming and Other Computing Paradigms
(ASPOCP 2013), 6th International Workshop, August 25, 2013, Istanbul, Turkey | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This volume contains the papers presented at the sixth workshop on Answer Set
Programming and Other Computing Paradigms (ASPOCP 2013) held on August 25th,
2013 in Istanbul, co-located with the 29th International Conference on Logic
Programming (ICLP 2013). It thus continues a series of previous events
co-located with ICLP, aiming at facilitating the discussion about crossing the
boundaries of current ASP techniques in theory, solving, and applications, in
combination with or inspired by other computing paradigms.
| [
{
"version": "v1",
"created": "Sat, 28 Dec 2013 11:08:35 GMT"
}
]
| 1,388,448,000,000 | [
[
"Fink",
"Michael",
""
],
[
"Lierler",
"Yuliya",
""
]
]
|
1312.7740 | Reza Mortezapour | Reza Mortezapour, Mehdi Afzali | Assessment of Customer Credit through Combined Clustering of Artificial
Neural Networks, Genetics Algorithm and Bayesian Probabilities | 5 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, with respect to the increasing growth of demand to get credit from the
customers of banks and finance and credit institutions, using an effective and
efficient method to decrease the risk of non-repayment of credit given is very
necessary. Assessment of customers' credit is one of the most important and the
most essential duties of banks and institutions, and if an error occurs in this
field, it would leads to the great losses for banks and institutions. Thus,
using the predicting computer systems has been significantly progressed in
recent decades. The data that are provided to the credit institutions' managers
help them to make a straight decision for giving the credit or not-giving it.
In this paper, we will assess the customer credit through a combined
classification using artificial neural networks, genetics algorithm and
Bayesian probabilities simultaneously, and the results obtained from three
methods mentioned above would be used to achieve an appropriate and final
result. We use the K_folds cross validation test in order to assess the method
and finally, we compare the proposed method with the methods such as
Clustering-Launched Classification (CLC), Support Vector Machine (SVM) as well
as GA+SVM where the genetics algorithm has been used to improve them.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2013 15:31:25 GMT"
}
]
| 1,388,448,000,000 | [
[
"Mortezapour",
"Reza",
""
],
[
"Afzali",
"Mehdi",
""
]
]
|
1401.0245 | Sujit Gath | S.J Gath and R.V Kulkarni | A Review: Expert System for Diagnosis of Myocardial Infarction | 7 pages. arXiv admin note: text overlap with arXiv:1006.4544 by other
authors | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A computer Program Capable of performing at a human-expert level in a narrow
problem domain area is called an expert system. Management of uncertainty is an
intrinsically important issue in the design of expert systems because much of
the information in the knowledge base of a typical expert system is imprecise,
incomplete or not totally reliable. In this paper, the author present s the
review of past work that has been carried out by various researchers based on
development of expert systems for the diagnosis of cardiac disease
| [
{
"version": "v1",
"created": "Wed, 1 Jan 2014 03:59:22 GMT"
}
]
| 1,388,707,200,000 | [
[
"Gath",
"S. J",
""
],
[
"Kulkarni",
"R. V",
""
]
]
|
1401.1247 | Guy Van den Broeck | Mathias Niepert and Guy Van den Broeck | Tractability through Exchangeability: A New Perspective on Efficient
Probabilistic Inference | In Proceedings of the 28th AAAI Conference on Artificial Intelligence | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exchangeability is a central notion in statistics and probability theory. The
assumption that an infinite sequence of data points is exchangeable is at the
core of Bayesian statistics. However, finite exchangeability as a statistical
property that renders probabilistic inference tractable is less
well-understood. We develop a theory of finite exchangeability and its relation
to tractable probabilistic inference. The theory is complementary to that of
independence and conditional independence. We show that tractable inference in
probabilistic models with high treewidth and millions of variables can be
understood using the notion of finite (partial) exchangeability. We also show
that existing lifted inference algorithms implicitly utilize a combination of
conditional independence and partial exchangeability.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2014 00:30:25 GMT"
},
{
"version": "v2",
"created": "Tue, 22 Apr 2014 22:21:16 GMT"
}
]
| 1,398,297,600,000 | [
[
"Niepert",
"Mathias",
""
],
[
"Broeck",
"Guy Van den",
""
]
]
|
1401.1533 | Devis Pantano | Devis Pantano | Proposta di nuovi strumenti per comprendere come funziona la cognizione
(Novel tools to understand how cognition works) | In Italian | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I think that the main reason why we do not understand the general principles
of how knowledge works (and probably also the reason why we have not yet
designed and built efficient machines capable of artificial intelligence), is
not the excessive complexity of cognitive phenomena, but the lack of the
conceptual and methodological tools to properly address the problem. It is like
trying to build up Physics without the concept of number, or to understand the
origin of species without including the mechanism of natural selection. In this
paper I propose some new conceptual and methodological tools, which seem to
offer a real opportunity to understand the logic of cognitive processes. I
propose a new method to properly treat the concepts of structure and schema,
and to perform on them operations of structural analysis. These operations
allow to move straightforwardly from concrete to more abstract representations.
With these tools I will suggest a definition for the concept of rule, of
regularity and of emergent phenomena. From the analysis of some important
aspects of the rules, I suggest to distinguish them in operational and
associative rules. I propose that associative rules assume a dominant role in
cognition. I also propose a definition for the concept of problem. At the end I
will briefly illustrate a possible general model for cognitive systems.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2014 22:38:18 GMT"
},
{
"version": "v2",
"created": "Mon, 27 Jan 2014 22:26:33 GMT"
},
{
"version": "v3",
"created": "Fri, 18 Apr 2014 19:39:37 GMT"
}
]
| 1,398,038,400,000 | [
[
"Pantano",
"Devis",
""
]
]
|
1401.1669 | J. G. Wolff | J. Gerard Wolff | Smart machines and the SP theory of intelligence | arXiv admin note: substantial text overlap with arXiv:1306.3890 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | These notes describe how the "SP theory of intelligence", and its embodiment
in the "SP machine", may help to realise cognitive computing, as described in
the book "Smart Machines". In the SP system, information compression and a
concept of "multiple alignment" are centre stage. The system is designed to
integrate such things as unsupervised learning, pattern recognition,
probabilistic reasoning, and more. It may help to overcome the problem of
variety in big data, it may serve in pattern recognition and in the
unsupervised learning of structure in data, and it may facilitate the
management and transmission of big data. There is potential, via information
compression, for substantial gains in computational efficiency, especially in
the use of energy. The SP system may help to realise data-centric computing,
perhaps via a development of Hebb's concept of a "cell assembly", or via the
use of light or DNA for the processing of information. It has potential in the
management of errors and uncertainty in data, in medical diagnosis, in
processing streams of data, and in promoting adaptability in robots.
| [
{
"version": "v1",
"created": "Wed, 8 Jan 2014 11:32:56 GMT"
}
]
| 1,392,854,400,000 | [
[
"Wolff",
"J. Gerard",
""
]
]
|
1401.2153 | V Karthikeyan VKK | V.Karthikeyan, V.J.Vijayalakshmi and P.Jeyakumar | Ontology - Based Dynamic Business Process Customization | This paper has been withdrawn by the author due to a crucial sign
error | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interaction between business models is used in consumer centric manner
instead of using a producer centric approach for customizing the business
process in cloud environment. The knowledge based human semantic web is used
for customizing the business process It introduces the Human Semantic Web as a
conceptual interface, providing human-understandable semantics on top of the
ordinary Semantic Web, which provides machine-readable semantics based on RDF
in this mismatching is a major problem. To overcome this following technique
automatic customization detection is an automated process of detecting possible
elements or variables of a business process that needto be especially treated
in order to suit the requirement of the other process. To the business
processto be customized as the primary business process and those that it
collaborates with as secondary business process or SBP Automatic customization
enactment is an automated process of taking actions to perform the
customization on the PBP according to the detected customization spots and the
automatic reasoning on the customization conceptualization knowledge framework.
The process of customizing businessprocesses by composite the web pages by
using web service.
| [
{
"version": "v1",
"created": "Thu, 9 Jan 2014 08:23:27 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jan 2014 08:21:00 GMT"
},
{
"version": "v3",
"created": "Thu, 23 Jan 2014 03:39:52 GMT"
}
]
| 1,390,521,600,000 | [
[
"Karthikeyan",
"V.",
""
],
[
"Vijayalakshmi",
"V. J.",
""
],
[
"Jeyakumar",
"P.",
""
]
]
|
1401.2474 | Barry Hurley | Barry Hurley, Serdar Kadioglu, Yuri Malitsky, Barry O'Sullivan | Transformation-based Feature Computation for Algorithm Portfolios | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instance-specific algorithm configuration and algorithm portfolios have been
shown to offer significant improvements over single algorithm approaches in a
variety of application domains. In the SAT and CSP domains algorithm portfolios
have consistently dominated the main competitions in these fields for the past
five years. For a portfolio approach to be effective there are two crucial
conditions that must be met. First, there needs to be a collection of
complementary solvers with which to make a portfolio. Second, there must be a
collection of problem features that can accurately identify structural
differences between instances. This paper focuses on the latter issue: feature
representation, because, unlike SAT, not every problem has well-studied
features. We employ the well-known SATzilla feature set, but compute
alternative sets on different SAT encodings of CSPs. We show that regardless of
what encoding is used to convert the instances, adequate structural information
is maintained to differentiate between problem instances, and that this can be
exploited to make an effective portfolio-based CSP solver.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2014 22:05:39 GMT"
}
]
| 1,389,657,600,000 | [
[
"Hurley",
"Barry",
""
],
[
"Kadioglu",
"Serdar",
""
],
[
"Malitsky",
"Yuri",
""
],
[
"O'Sullivan",
"Barry",
""
]
]
|
1401.2483 | Andino Maseleno | Andino Maseleno and Md. Mahmud Hasan | Dempster-Shafer Theory for Move Prediction in Start Kicking of The
Bicycle Kick of Sepak Takraw Game | Middle-East Journal of Scientific Research, Vol. 16, No. 7, 2013.
ISSN 1990-9233, pp. 896 - 903 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | This paper presents Dempster-Shafer theory for move prediction in start
kicking of the bicycle kick of sepak takraw game. Sepak takraw is a highly
complex net-barrier kicking sport that involves dazzling displays of quick
reflexes, acrobatic twists, turns and swerves of the agile human body movement.
A Bicycle kick or Scissor kick is a physical move made by throwing the body up
into the air, making a shearing movement with the legs to get one leg in front
of the other without holding on to the ground. Specifically, this paper
considers bicycle kick of sepak takraw game in start kicking of the ball with
uncertainty where player has different awareness regarding the contingencies.
We have chosen Dempster-Shafer theory because the advantages of the
Dempster-Shafer theory which include the ability to model information in a
flexible way without requiring a probability to be assigned to each element in
a set, providing a convenient and simple mechanism for combining two or more
pieces of evidence under certain conditions, it can model ignorance explicitly,
rejection of the law of additivity for belief in disjoint propositions.
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2014 23:48:40 GMT"
}
]
| 1,389,657,600,000 | [
[
"Maseleno",
"Andino",
""
],
[
"Hasan",
"Md. Mahmud",
""
]
]
|
1401.3428 | Nicolas Meuleau | Nicolas Meuleau, Emmanuel Benazera, Ronen I. Brafman, Eric A. Hansen,
Mausam | A Heuristic Search Approach to Planning with Continuous Resources in
Stochastic Domains | null | Journal Of Artificial Intelligence Research, Volume 34, pages
27-59, 2009 | 10.1613/jair.2529 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of optimal planning in stochastic domains with
resource constraints, where the resources are continuous and the choice of
action at each step depends on resource availability. We introduce the HAO*
algorithm, a generalization of the AO* algorithm that performs search in a
hybrid state space that is modeled using both discrete and continuous state
variables, where the continuous variables represent monotonic resources. Like
other heuristic search algorithms, HAO* leverages knowledge of the start state
and an admissible heuristic to focus computational effort on those parts of the
state space that could be reached from the start state by following an optimal
policy. We show that this approach is especially effective when resource
constraints limit how much of the state space is reachable. Experimental
results demonstrate its effectiveness in the domain that motivates our
research: automated planning for planetary exploration rovers.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:46:00 GMT"
}
]
| 1,389,830,400,000 | [
[
"Meuleau",
"Nicolas",
""
],
[
"Benazera",
"Emmanuel",
""
],
[
"Brafman",
"Ronen I.",
""
],
[
"Hansen",
"Eric A.",
""
],
[
"Mausam",
"",
""
]
]
|
1401.3431 | James Delgrande | James Delgrande, Yi Jin, Francis Jeffry Pelletier | Compositional Belief Update | null | Journal Of Artificial Intelligence Research, Volume 32, pages
757-791, 2008 | 10.1613/jair.2539 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we explore a class of belief update operators, in which the
definition of the operator is compositional with respect to the sentence to be
added. The goal is to provide an update operator that is intuitive, in that its
definition is based on a recursive decomposition of the update sentences
structure, and that may be reasonably implemented. In addressing update, we
first provide a definition phrased in terms of the models of a knowledge base.
While this operator satisfies a core group of the benchmark Katsuno-Mendelzon
update postulates, not all of the postulates are satisfied. Other
Katsuno-Mendelzon postulates can be obtained by suitably restricting the
syntactic form of the sentence for update, as we show. In restricting the
syntactic form of the sentence for update, we also obtain a hierarchy of update
operators with Winsletts standard semantics as the most basic interesting
approach captured. We subsequently give an algorithm which captures this
approach; in the general case the algorithm is exponential, but with some
not-unreasonable assumptions we obtain an algorithm that is linear in the size
of the knowledge base. Hence the resulting approach has much better complexity
characteristics than other operators in some situations. We also explore other
compositional belief change operators: erasure is developed as a dual operator
to update; we show that a forget operator is definable in terms of update; and
we give a definition of the compositional revision operator. We obtain that
compositional revision, under the most natural definition, yields the Satoh
revision operator.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:48:21 GMT"
}
]
| 1,389,830,400,000 | [
[
"Delgrande",
"James",
""
],
[
"Jin",
"Yi",
""
],
[
"Pelletier",
"Francis Jeffry",
""
]
]
|
1401.3436 | St\'ephane Ross | St\'ephane Ross, Joelle Pineau, S\'ebastien Paquet, Brahim Chaib-draa | Online Planning Algorithms for POMDPs | null | Journal Of Artificial Intelligence Research, Volume 32, pages
663-704, 2008 | 10.1613/jair.2567 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially Observable Markov Decision Processes (POMDPs) provide a rich
framework for sequential decision-making under uncertainty in stochastic
domains. However, solving a POMDP is often intractable except for small
problems due to their complexity. Here, we focus on online approaches that
alleviate the computational complexity by computing good local policies at each
decision step during the execution. Online algorithms generally consist of a
lookahead search to find the best action to execute at each time step in an
environment. Our objectives here are to survey the various existing online
POMDP methods, analyze their properties and discuss their advantages and
disadvantages; and to thoroughly evaluate these online approaches in different
environments under various metrics (return, error bound reduction, lower bound
improvement). Our experimental results indicate that state-of-the-art online
heuristic search methods can handle large POMDP domains efficiently.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:52:25 GMT"
}
]
| 1,389,830,400,000 | [
[
"Ross",
"Stéphane",
""
],
[
"Pineau",
"Joelle",
""
],
[
"Paquet",
"Sébastien",
""
],
[
"Chaib-draa",
"Brahim",
""
]
]
|
1401.3437 | Eyal Amir | Eyal Amir, Allen Chang | Learning Partially Observable Deterministic Action Models | null | Journal Of Artificial Intelligence Research, Volume 33, pages
349-402, 2008 | 10.1613/jair.2575 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present exact algorithms for identifying deterministic-actions effects and
preconditions in dynamic partially observable domains. They apply when one does
not know the action model(the way actions affect the world) of a domain and
must learn it from partial observations over time. Such scenarios are common in
real world applications. They are challenging for AI tasks because traditional
domain structures that underly tractability (e.g., conditional independence)
fail there (e.g., world features become correlated). Our work departs from
traditional assumptions about partial observations and action models. In
particular, it focuses on problems in which actions are deterministic of simple
logical structure and observation models have all features observed with some
frequency. We yield tractable algorithms for the modified problem for such
domains.
Our algorithms take sequences of partial observations over time as input, and
output deterministic action models that could have lead to those observations.
The algorithms output all or one of those models (depending on our choice), and
are exact in that no model is misclassified given the observations. Our
algorithms take polynomial time in the number of time steps and state features
for some traditional action classes examined in the AI-planning literature,
e.g., STRIPS actions. In contrast, traditional approaches for HMMs and
Reinforcement Learning are inexact and exponentially intractable for such
domains. Our experiments verify the theoretical tractability guarantees, and
show that we identify action models exactly. Several applications in planning,
autonomous exploration, and adventure-game playing already use these results.
They are also promising for probabilistic settings, partially observable
reinforcement learning, and diagnosis.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:52:56 GMT"
}
]
| 1,389,830,400,000 | [
[
"Amir",
"Eyal",
""
],
[
"Chang",
"Allen",
""
]
]
|
1401.3438 | Neil C.A. Moore | Neil C.A. Moore, Patrick Prosser | The Ultrametric Constraint and its Application to Phylogenetics | null | Journal Of Artificial Intelligence Research, Volume 32, pages
901-938, 2008 | 10.1613/jair.2580 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A phylogenetic tree shows the evolutionary relationships among species.
Internal nodes of the tree represent speciation events and leaf nodes
correspond to species. A goal of phylogenetics is to combine such trees into
larger trees, called supertrees, whilst respecting the relationships in the
original trees. A rooted tree exhibits an ultrametric property; that is, for
any three leaves of the tree it must be that one pair has a deeper most recent
common ancestor than the other pairs, or that all three have the same most
recent common ancestor. This inspires a constraint programming encoding for
rooted trees. We present an efficient constraint that enforces the ultrametric
property over a symmetric array of constrained integer variables, with the
inevitable property that the lower bounds of any three variables are mutually
supportive. We show that this allows an efficient constraint-based solution to
the supertree construction problem. We demonstrate that the versatility of
constraint programming can be exploited to allow solutions to variants of the
supertree construction problem.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:53:22 GMT"
}
]
| 1,389,830,400,000 | [
[
"Moore",
"Neil C. A.",
""
],
[
"Prosser",
"Patrick",
""
]
]
|
1401.3439 | Sonia Chernova | Sonia Chernova, Manuela Veloso | Interactive Policy Learning through Confidence-Based Autonomy | null | Journal Of Artificial Intelligence Research, Volume 34, pages
1-25, 2009 | 10.1613/jair.2584 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present Confidence-Based Autonomy (CBA), an interactive algorithm for
policy learning from demonstration. The CBA algorithm consists of two
components which take advantage of the complimentary abilities of humans and
computer agents. The first component, Confident Execution, enables the agent to
identify states in which demonstration is required, to request a demonstration
from the human teacher and to learn a policy based on the acquired data. The
algorithm selects demonstrations based on a measure of action selection
confidence, and our results show that using Confident Execution the agent
requires fewer demonstrations to learn the policy than when demonstrations are
selected by a human teacher. The second algorithmic component, Corrective
Demonstration, enables the teacher to correct any mistakes made by the agent
through additional demonstrations in order to improve the policy and future
task performance. CBA and its individual components are compared and evaluated
in a complex simulated driving domain. The complete CBA algorithm results in
the best overall learning performance, successfully reproducing the behavior of
the teacher while balancing the tradeoff between number of demonstrations and
number of incorrect actions during learning.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:53:48 GMT"
}
]
| 1,389,830,400,000 | [
[
"Chernova",
"Sonia",
""
],
[
"Veloso",
"Manuela",
""
]
]
|
1401.3442 | Amir Gershman | Amir Gershman, Amnon Meisels, Roie Zivan | Asynchronous Forward Bounding for Distributed COPs | null | Journal Of Artificial Intelligence Research, Volume 34, pages
61-88, 2009 | 10.1613/jair.2591 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new search algorithm for solving distributed constraint optimization
problems (DisCOPs) is presented. Agents assign variables sequentially and
compute bounds on partial assignments asynchronously. The asynchronous bounds
computation is based on the propagation of partial assignments. The
asynchronous forward-bounding algorithm (AFB) is a distributed optimization
search algorithm that keeps one consistent partial assignment at all times. The
algorithm is described in detail and its correctness proven. Experimental
evaluation shows that AFB outperforms synchronous branch and bound by many
orders of magnitude, and produces a phase transition as the tightness of the
problem increases. This is an analogous effect to the phase transition that has
been observed when local consistency maintenance is applied to MaxCSPs. The AFB
algorithm is further enhanced by the addition of a backjumping mechanism,
resulting in the AFB-BJ algorithm. Distributed backjumping is based on
accumulated information on bounds of all values and on processing concurrently
a queue of candidate goals for the next move back. The AFB-BJ algorithm is
compared experimentally to other DisCOP algorithms (ADOPT, DPOP, OptAPO) and is
shown to be a very efficient algorithm for DisCOPs.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:54:38 GMT"
}
]
| 1,389,830,400,000 | [
[
"Gershman",
"Amir",
""
],
[
"Meisels",
"Amnon",
""
],
[
"Zivan",
"Roie",
""
]
]
|
1401.3443 | Antonis Kakas | Antonis Kakas, Paolo Mancarella, Fariba Sadri, Kostas Stathis,
Francesca Toni | Computational Logic Foundations of KGP Agents | null | Journal Of Artificial Intelligence Research, Volume 33, pages
285-348, 2008 | 10.1613/jair.2596 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the computational logic foundations of a model of agency
called the KGP (Knowledge, Goals and Plan model. This model allows the
specification of heterogeneous agents that can interact with each other, and
can exhibit both proactive and reactive behaviour allowing them to function in
dynamic environments by adjusting their goals and plans when changes happen in
such environments. KGP provides a highly modular agent architecture that
integrates a collection of reasoning and physical capabilities, synthesised
within transitions that update the agents state in response to reasoning,
sensing and acting. Transitions are orchestrated by cycle theories that specify
the order in which transitions are executed while taking into account the
dynamic context and agent preferences, as well as selection operators for
providing inputs to transitions.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:54:59 GMT"
}
]
| 1,389,830,400,000 | [
[
"Kakas",
"Antonis",
""
],
[
"Mancarella",
"Paolo",
""
],
[
"Sadri",
"Fariba",
""
],
[
"Stathis",
"Kostas",
""
],
[
"Toni",
"Francesca",
""
]
]
|
1401.3444 | Didier Dubois | Didier Dubois, H\'el\`ene Fargier, Jean-Fran\c{c}ois Bonnefon | On the Qualitative Comparison of Decisions Having Positive and Negative
Features | null | Journal Of Artificial Intelligence Research, Volume 32, pages
385-417, 2008 | 10.1613/jair.2520 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Making a decision is often a matter of listing and comparing positive and
negative arguments. In such cases, the evaluation scale for decisions should be
considered bipolar, that is, negative and positive values should be explicitly
distinguished. That is what is done, for example, in Cumulative Prospect
Theory. However, contraryto the latter framework that presupposes genuine
numerical assessments, human agents often decide on the basis of an ordinal
ranking of the pros and the cons, and by focusing on the most salient
arguments. In other terms, the decision process is qualitative as well as
bipolar. In this article, based on a bipolar extension of possibility theory,
we define and axiomatically characterize several decision rules tailored for
the joint handling of positive and negative arguments in an ordinal setting.
The simplest rules can be viewed as extensions of the maximin and maximax
criteria to the bipolar case, and consequently suffer from poor decisive power.
More decisive rules that refine the former are also proposed. These refinements
agree both with principles of efficiency and with the spirit of
order-of-magnitude reasoning, that prevails in qualitative decision theory. The
most refined decision rule uses leximin rankings of the pros and the cons, and
the ideas of counting arguments of equal strength and cancelling pros by cons.
It is shown to come down to a special case of Cumulative Prospect Theory, and
to subsume the Take the Best heuristic studied by cognitive psychologists.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 04:55:20 GMT"
}
]
| 1,389,830,400,000 | [
[
"Dubois",
"Didier",
""
],
[
"Fargier",
"Hélène",
""
],
[
"Bonnefon",
"Jean-François",
""
]
]
|
1401.3448 | Robert Mateescu | Robert Mateescu, Rina Dechter, Radu Marinescu | AND/OR Multi-Valued Decision Diagrams (AOMDDs) for Graphical Models | null | Journal Of Artificial Intelligence Research, Volume 33, pages
465-519, 2008 | 10.1613/jair.2605 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by the recently introduced framework of AND/OR search spaces for
graphical models, we propose to augment Multi-Valued Decision Diagrams (MDD)
with AND nodes, in order to capture function decomposition structure and to
extend these compiled data structures to general weighted graphical models
(e.g., probabilistic models). We present the AND/OR Multi-Valued Decision
Diagram (AOMDD) which compiles a graphical model into a canonical form that
supports polynomial (e.g., solution counting, belief updating) or constant time
(e.g. equivalence of graphical models) queries. We provide two algorithms for
compiling the AOMDD of a graphical model. The first is search-based, and works
by applying reduction rules to the trace of the memory intensive AND/OR search
algorithm. The second is inference-based and uses a Bucket Elimination schedule
to combine the AOMDDs of the input functions via the the APPLY operator. For
both algorithms, the compilation time and the size of the AOMDD are, in the
worst case, exponential in the treewidth of the graphical model, rather than
pathwidth as is known for ordered binary decision diagrams (OBDDs). We
introduce the concept of semantic treewidth, which helps explain why the size
of a decision diagram is often much smaller than the worst case bound. We
provide an experimental evaluation that demonstrates the potential of AOMDDs.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:09:35 GMT"
}
]
| 1,389,830,400,000 | [
[
"Mateescu",
"Robert",
""
],
[
"Dechter",
"Rina",
""
],
[
"Marinescu",
"Radu",
""
]
]
|
1401.3450 | Tal Grinshpoun | Tal Grinshpoun, Amnon Meisels | Completeness and Performance Of The APO Algorithm | arXiv admin note: substantial text overlap with arXiv:1109.6052 by
other authors | Journal Of Artificial Intelligence Research, Volume 33, pages
223-258, 2008 | 10.1613/jair.2611 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Asynchronous Partial Overlay (APO) is a search algorithm that uses
cooperative mediation to solve Distributed Constraint Satisfaction Problems
(DisCSPs). The algorithm partitions the search into different subproblems of
the DisCSP. The original proof of completeness of the APO algorithm is based on
the growth of the size of the subproblems. The present paper demonstrates that
this expected growth of subproblems does not occur in some situations, leading
to a termination problem of the algorithm. The problematic parts in the APO
algorithm that interfere with its completeness are identified and necessary
modifications to the algorithm that fix these problematic parts are given. The
resulting version of the algorithm, Complete Asynchronous Partial Overlay
(CompAPO), ensures its completeness. Formal proofs for the soundness and
completeness of CompAPO are given. A detailed performance evaluation of CompAPO
comparing it to other DisCSP algorithms is presented, along with an extensive
experimental evaluation of the algorithm's unique behavior. Additionally, an
optimization version of the algorithm, CompOptAPO, is presented, discussed, and
evaluated.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:10:44 GMT"
}
]
| 1,389,830,400,000 | [
[
"Grinshpoun",
"Tal",
""
],
[
"Meisels",
"Amnon",
""
]
]
|
1401.3453 | Judy Goldsmith | Judy Goldsmith, Jerome Lang, Miroslaw Truszczyski, Nic Wilson | The Computational Complexity of Dominance and Consistency in CP-Nets | null | Journal Of Artificial Intelligence Research, Volume 33, pages
403-432, 2008 | 10.1613/jair.2627 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the computational complexity of testing dominance and
consistency in CP-nets. Previously, the complexity of dominance has been
determined for restricted classes in which the dependency graph of the CP-net
is acyclic. However, there are preferences of interest that define cyclic
dependency graphs; these are modeled with general CP-nets. In our main results,
we show here that both dominance and consistency for general CP-nets are
PSPACE-complete. We then consider the concept of strong dominance, dominance
equivalence and dominance incomparability, and several notions of optimality,
and identify the complexity of the corresponding decision problems. The
reductions used in the proofs are from STRIPS planning, and thus reinforce the
earlier established connections between both areas.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:13:25 GMT"
}
]
| 1,389,830,400,000 | [
[
"Goldsmith",
"Judy",
""
],
[
"Lang",
"Jerome",
""
],
[
"Truszczyski",
"Miroslaw",
""
],
[
"Wilson",
"Nic",
""
]
]
|
1401.3455 | Prashant Doshi | Prashant Doshi, Piotr J. Gmytrasiewicz | Monte Carlo Sampling Methods for Approximating Interactive POMDPs | null | Journal Of Artificial Intelligence Research, Volume 34, pages
297-337, 2009 | 10.1613/jair.2630 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially observable Markov decision processes (POMDPs) provide a principled
framework for sequential planning in uncertain single agent settings. An
extension of POMDPs to multiagent settings, called interactive POMDPs
(I-POMDPs), replaces POMDP belief spaces with interactive hierarchical belief
systems which represent an agent's belief about the physical world, about
beliefs of other agents, and about their beliefs about others' beliefs. This
modification makes the difficulties of obtaining solutions due to complexity of
the belief and policy spaces even more acute. We describe a general method for
obtaining approximate solutions of I-POMDPs based on particle filtering (PF).
We introduce the interactive PF, which descends the levels of the interactive
belief hierarchies and samples and propagates beliefs at each level. The
interactive PF is able to mitigate the belief space complexity, but it does not
address the policy space complexity. To mitigate the policy space complexity --
sometimes also called the curse of history -- we utilize a complementary method
based on sampling likely observations while building the look ahead
reachability tree. While this approach does not completely address the curse of
history, it beats back the curse's impact substantially. We provide
experimental results and chart future work.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:14:08 GMT"
}
]
| 1,389,830,400,000 | [
[
"Doshi",
"Prashant",
""
],
[
"Gmytrasiewicz",
"Piotr J.",
""
]
]
|
1401.3458 | Fahiem Bacchus | Fahiem Bacchus, Shannon Dalmao, Toniann Pitassi | Solving #SAT and Bayesian Inference with Backtracking Search | null | Journal Of Artificial Intelligence Research, Volume 34, pages
391-442, 2009 | 10.1613/jair.2648 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inference in Bayes Nets (BAYES) is an important problem with numerous
applications in probabilistic reasoning. Counting the number of satisfying
assignments of a propositional formula (#SAT) is a closely related problem of
fundamental theoretical importance. Both these problems, and others, are
members of the class of sum-of-products (SUMPROD) problems. In this paper we
show that standard backtracking search when augmented with a simple memoization
scheme (caching) can solve any sum-of-products problem with time complexity
that is at least as good any other state-of-the-art exact algorithm, and that
it can also achieve the best known time-space tradeoff. Furthermore,
backtracking's ability to utilize more flexible variable orderings allows us to
prove that it can achieve an exponential speedup over other standard algorithms
for SUMPROD on some instances.
The ideas presented here have been utilized in a number of solvers that have
been applied to various types of sum-of-product problems. These system's have
exploited the fact that backtracking can naturally exploit more of the
problem's structure to achieve improved performance on a range of
probleminstances. Empirical evidence of this performance gain has appeared in
published works describing these solvers, and we provide references to these
works.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:17:49 GMT"
}
]
| 1,389,830,400,000 | [
[
"Bacchus",
"Fahiem",
""
],
[
"Dalmao",
"Shannon",
""
],
[
"Pitassi",
"Toniann",
""
]
]
|
1401.3459 | Maxim Binshtok | Maxim Binshtok, Ronen I. Brafman, Carmel Domshlak, Solomon Eyal
Shimony | Generic Preferences over Subsets of Structured Objects | null | Journal Of Artificial Intelligence Research, Volume 34, pages
133-164, 2009 | 10.1613/jair.2653 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Various tasks in decision making and decision support systems require
selecting a preferred subset of a given set of items. Here we focus on problems
where the individual items are described using a set of characterizing
attributes, and a generic preference specification is required, that is, a
specification that can work with an arbitrary set of items. For example,
preferences over the content of an online newspaper should have this form: At
each viewing, the newspaper contains a subset of the set of articles currently
available. Our preference specification over this subset should be provided
offline, but we should be able to use it to select a subset of any currently
available set of articles, e.g., based on their tags. We present a general
approach for lifting formalisms for specifying preferences over objects with
multiple attributes into ones that specify preferences over subsets of such
objects. We also show how we can compute an optimal subset given such a
specification in a relatively efficient manner. We provide an empirical
evaluation of the approach as well as some worst-case complexity results.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:18:50 GMT"
}
]
| 1,389,830,400,000 | [
[
"Binshtok",
"Maxim",
""
],
[
"Brafman",
"Ronen I.",
""
],
[
"Domshlak",
"Carmel",
""
],
[
"Shimony",
"Solomon Eyal",
""
]
]
|
1401.3460 | Daniel S. Bernstein | Daniel S. Bernstein, Christopher Amato, Eric A. Hansen, Shlomo
Zilberstein | Policy Iteration for Decentralized Control of Markov Decision Processes | null | Journal Of Artificial Intelligence Research, Volume 34, pages
89-132, 2009 | 10.1613/jair.2667 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coordination of distributed agents is required for problems arising in many
areas, including multi-robot systems, networking and e-commerce. As a formal
framework for such problems, we use the decentralized partially observable
Markov decision process (DEC-POMDP). Though much work has been done on optimal
dynamic programming algorithms for the single-agent version of the problem,
optimal algorithms for the multiagent case have been elusive. The main
contribution of this paper is an optimal policy iteration algorithm for solving
DEC-POMDPs. The algorithm uses stochastic finite-state controllers to represent
policies. The solution can include a correlation device, which allows agents to
correlate their actions without communicating. This approach alternates between
expanding the controller and performing value-preserving transformations, which
modify the controller without sacrificing value. We present two efficient
value-preserving transformations: one can reduce the size of the controller and
the other can improve its value while keeping the size fixed. Empirical results
demonstrate the usefulness of value-preserving transformations in increasing
value while keeping controller size to a minimum. To broaden the applicability
of the approach, we also present a heuristic version of the policy iteration
algorithm, which sacrifices convergence to optimality. This algorithm further
reduces the size of the controllers at each step by assuming that probability
distributions over the other agents actions are known. While this assumption
may not hold in general, it helps produce higher quality solutions in our test
problems.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:20:25 GMT"
}
]
| 1,389,830,400,000 | [
[
"Bernstein",
"Daniel S.",
""
],
[
"Amato",
"Christopher",
""
],
[
"Hansen",
"Eric A.",
""
],
[
"Zilberstein",
"Shlomo",
""
]
]
|
1401.3461 | Marek Petrik | Marek Petrik, Shlomo Zilberstein | A Bilinear Programming Approach for Multiagent Planning | null | Journal Of Artificial Intelligence Research, Volume 35, pages
235-274, 2009 | 10.1613/jair.2673 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiagent planning and coordination problems are common and known to be
computationally hard. We show that a wide range of two-agent problems can be
formulated as bilinear programs. We present a successive approximation
algorithm that significantly outperforms the coverage set algorithm, which is
the state-of-the-art method for this class of multiagent problems. Because the
algorithm is formulated for bilinear programs, it is more general and simpler
to implement. The new algorithm can be terminated at any time and-unlike the
coverage set algorithm-it facilitates the derivation of a useful online
performance bound. It is also much more efficient, on average reducing the
computation time of the optimal solution by about four orders of magnitude.
Finally, we introduce an automatic dimensionality reduction method that
improves the effectiveness of the algorithm, extending its applicability to new
domains and providing a new way to analyze a subclass of bilinear programs.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:21:26 GMT"
}
]
| 1,389,830,400,000 | [
[
"Petrik",
"Marek",
""
],
[
"Zilberstein",
"Shlomo",
""
]
]
|
1401.3468 | Hector Geffner | Hector Palacios, Hector Geffner | Compiling Uncertainty Away in Conformant Planning Problems with Bounded
Width | null | Journal Of Artificial Intelligence Research, Volume 35, pages
623-675, 2009 | 10.1613/jair.2708 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conformant planning is the problem of finding a sequence of actions for
achieving a goal in the presence of uncertainty in the initial state or action
effects. The problem has been approached as a path-finding problem in belief
space where good belief representations and heuristics are critical for scaling
up. In this work, a different formulation is introduced for conformant problems
with deterministic actions where they are automatically converted into
classical ones and solved by an off-the-shelf classical planner. The
translation maps literals L and sets of assumptions t about the initial
situation, into new literals KL/t that represent that L must be true if t is
initially true. We lay out a general translation scheme that is sound and
establish the conditions under which the translation is also complete. We show
that the complexity of the complete translation is exponential in a parameter
of the problem called the conformant width, which for most benchmarks is
bounded. The planner based on this translation exhibits good performance in
comparison with existing planners, and is the basis for T0, the best performing
planner in the Conformant Track of the 2006 International Planning Competition.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:27:00 GMT"
}
]
| 1,389,830,400,000 | [
[
"Palacios",
"Hector",
""
],
[
"Geffner",
"Hector",
""
]
]
|
1401.3469 | Vicente Ruiz de Angulo | Vicente Ruiz de Angulo, Carme Torras | Exploiting Single-Cycle Symmetries in Continuous Constraint Problems | null | Journal Of Artificial Intelligence Research, Volume 34, pages
499-520, 2009 | 10.1613/jair.2711 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetries in discrete constraint satisfaction problems have been explored
and exploited in the last years, but symmetries in continuous constraint
problems have not received the same attention. Here we focus on permutations of
the variables consisting of one single cycle. We propose a procedure that takes
advantage of these symmetries by interacting with a continuous constraint
solver without interfering with it. A key concept in this procedure are the
classes of symmetric boxes formed by bisecting a n-dimensional cube at the same
point in all dimensions at the same time. We analyze these classes and quantify
them as a function of the cube dimensionality. Moreover, we propose a simple
algorithm to generate the representatives of all these classes for any number
of variables at very high rates. A problem example from the chemical
and#64257;eld and the cyclic n-roots problem are used to show the performance
of the approach in practice.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:27:27 GMT"
}
]
| 1,389,830,400,000 | [
[
"de Angulo",
"Vicente Ruiz",
""
],
[
"Torras",
"Carme",
""
]
]
|
1401.3470 | J\"org Hoffmann | J\"org Hoffmann, Piergiorgio Bertoli, Malte Helmert, Marco Pistore | Message-Based Web Service Composition, Integrity Constraints, and
Planning under Uncertainty: A New Connection | null | Journal Of Artificial Intelligence Research, Volume 35, pages
49-117, 2009 | 10.1613/jair.2716 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Thanks to recent advances, AI Planning has become the underlying technique
for several applications. Figuring prominently among these is automated Web
Service Composition (WSC) at the "capability" level, where services are
described in terms of preconditions and effects over ontological concepts. A
key issue in addressing WSC as planning is that ontologies are not only formal
vocabularies; they also axiomatize the possible relationships between concepts.
Such axioms correspond to what has been termed "integrity constraints" in the
actions and change literature, and applying a web service is essentially a
belief update operation. The reasoning required for belief update is known to
be harder than reasoning in the ontology itself. The support for belief update
is severely limited in current planning tools.
Our first contribution consists in identifying an interesting special case of
WSC which is both significant and more tractable. The special case, which we
term "forward effects", is characterized by the fact that every ramification of
a web service application involves at least one new constant generated as
output by the web service. We show that, in this setting, the reasoning
required for belief update simplifies to standard reasoning in the ontology
itself. This relates to, and extends, current notions of "message-based" WSC,
where the need for belief update is removed by a strong (often implicit or
informal) assumption of "locality" of the individual messages. We clarify the
computational properties of the forward effects case, and point out a strong
relation to standard notions of planning under uncertainty, suggesting that
effective tools for the latter can be successfully adapted to address the
former.
Furthermore, we identify a significant sub-case, named "strictly forward
effects", where an actual compilation into planning under uncertainty exists.
This enables us to exploit off-the-shelf planning tools to solve message-based
WSC in a general form that involves powerful ontologies, and requires reasoning
about partial matches between concepts. We provide empirical evidence that this
approach may be quite effective, using Conformant-FF as the underlying planner.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:27:56 GMT"
}
]
| 1,389,830,400,000 | [
[
"Hoffmann",
"Jörg",
""
],
[
"Bertoli",
"Piergiorgio",
""
],
[
"Helmert",
"Malte",
""
],
[
"Pistore",
"Marco",
""
]
]
|
1401.3471 | Marco Zaffalon | Marco Zaffalon, Enrique Miranda | Conservative Inference Rule for Uncertain Reasoning under Incompleteness | null | Journal Of Artificial Intelligence Research, Volume 34, pages
757-821, 2009 | 10.1613/jair.2736 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we formulate the problem of inference under incomplete
information in very general terms. This includes modelling the process
responsible for the incompleteness, which we call the incompleteness process.
We allow the process behaviour to be partly unknown. Then we use Walleys theory
of coherent lower previsions, a generalisation of the Bayesian theory to
imprecision, to derive the rule to update beliefs under incompleteness that
logically follows from our assumptions, and that we call conservative inference
rule. This rule has some remarkable properties: it is an abstract rule to
update beliefs that can be applied in any situation or domain; it gives us the
opportunity to be neither too optimistic nor too pessimistic about the
incompleteness process, which is a necessary condition to draw reliable while
strong enough conclusions; and it is a coherent rule, in the sense that it
cannot lead to inconsistencies. We give examples to show how the new rule can
be applied in expert systems, in parametric statistical inference, and in
pattern classification, and discuss more generally the view of incompleteness
processes defended here as well as some of its consequences.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:28:54 GMT"
}
]
| 1,389,830,400,000 | [
[
"Zaffalon",
"Marco",
""
],
[
"Miranda",
"Enrique",
""
]
]
|
1401.3474 | Andreas Krause | Andreas Krause, Carlos Guestrin | Optimal Value of Information in Graphical Models | null | Journal Of Artificial Intelligence Research, Volume 35, pages
557-591, 2009 | 10.1613/jair.2737 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world decision making tasks require us to choose among several
expensive observations. In a sensor network, for example, it is important to
select the subset of sensors that is expected to provide the strongest
reduction in uncertainty. In medical decision making tasks, one needs to select
which tests to administer before deciding on the most effective treatment. It
has been general practice to use heuristic-guided procedures for selecting
observations. In this paper, we present the first efficient optimal algorithms
for selecting observations for a class of probabilistic graphical models. For
example, our algorithms allow to optimally label hidden variables in Hidden
Markov Models (HMMs). We provide results for both selecting the optimal subset
of observations, and for obtaining an optimal conditional observation plan.
Furthermore we prove a surprising result: In most graphical models tasks, if
one designs an efficient algorithm for chain graphs, such as HMMs, this
procedure can be generalized to polytree graphical models. We prove that the
optimizing value of information is $NP^{PP}$-hard even for polytrees. It also
follows from our results that just computing decision theoretic value of
information objective functions, which are commonly used in practice, is a
#P-complete problem even on Naive Bayes models (a simple special case of
polytrees).
In addition, we consider several extensions, such as using our algorithms for
scheduling observation selection for multiple sensors. We demonstrate the
effectiveness of our approach on several real-world datasets, including a
prototype sensor network deployment for energy conservation in buildings.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:30:52 GMT"
}
]
| 1,389,830,400,000 | [
[
"Krause",
"Andreas",
""
],
[
"Guestrin",
"Carlos",
""
]
]
|
1401.3477 | Jos\'e Enrique Gallardo | Jos\'e Enrique Gallardo, Carlos Cotta, Antonio Jos\'e Fern\'andez | Solving Weighted Constraint Satisfaction Problems with Memetic/Exact
Hybrid Algorithms | arXiv admin note: substantial text overlap with arXiv:0812.4170 | Journal Of Artificial Intelligence Research, Volume 35, pages
533-555, 2009 | 10.1613/jair.2770 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A weighted constraint satisfaction problem (WCSP) is a constraint
satisfaction problem in which preferences among solutions can be expressed.
Bucket elimination is a complete technique commonly used to solve this kind of
constraint satisfaction problem. When the memory required to apply bucket
elimination is too high, a heuristic method based on it (denominated
mini-buckets) can be used to calculate bounds for the optimal solution.
Nevertheless, the curse of dimensionality makes these techniques impractical on
large scale problems. In response to this situation, we present a memetic
algorithm for WCSPs in which bucket elimination is used as a mechanism for
recombining solutions, providing the best possible child from the parental set.
Subsequently, a multi-level model in which this exact/metaheuristic hybrid is
further hybridized with branch-and-bound techniques and mini-buckets is
studied. As a case study, we have applied these algorithms to the resolution of
the maximum density still life problem, a hard constraint optimization problem
based on Conways game of life. The resulting algorithm consistently finds
optimal patterns for up to date solved instances in less time than current
approaches. Moreover, it is shown that this proposal provides new best known
solutions for very large instances.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:32:38 GMT"
}
]
| 1,389,830,400,000 | [
[
"Gallardo",
"José Enrique",
""
],
[
"Cotta",
"Carlos",
""
],
[
"Fernández",
"Antonio José",
""
]
]
|
1401.3481 | Matthias Zytnicki | Matthias Zytnicki, Christine Gaspin, Simon de Givry, Thomas Schiex | Bounds Arc Consistency for Weighted CSPs | null | Journal Of Artificial Intelligence Research, Volume 35, pages
593-621, 2009 | 10.1613/jair.2797 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Weighted Constraint Satisfaction Problem (WCSP) framework allows
representing and solving problems involving both hard constraints and cost
functions. It has been applied to various problems, including resource
allocation, bioinformatics, scheduling, etc. To solve such problems, solvers
usually rely on branch-and-bound algorithms equipped with local consistency
filtering, mostly soft arc consistency. However, these techniques are not well
suited to solve problems with very large domains. Motivated by the resolution
of an RNA gene localization problem inside large genomic sequences, and in the
spirit of bounds consistency for large domains in crisp CSPs, we introduce soft
bounds arc consistency, a new weighted local consistency specifically designed
for WCSP with very large domains. Compared to soft arc consistency, BAC
provides significantly improved time and space asymptotic complexity. In this
paper, we show how the semantics of cost functions can be exploited to further
improve the time complexity of BAC. We also compare both in theory and in
practice the efficiency of BAC on a WCSP with bounds consistency enforced on a
crisp CSP using cost variables. On two different real problems modeled as WCSP,
including our RNA gene localization problem, we observe that maintaining bounds
arc consistency outperforms arc consistency and also improves over bounds
consistency enforced on a constraint model with cost variables.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:34:30 GMT"
}
]
| 1,389,830,400,000 | [
[
"Zytnicki",
"Matthias",
""
],
[
"Gaspin",
"Christine",
""
],
[
"de Givry",
"Simon",
""
],
[
"Schiex",
"Thomas",
""
]
]
|
1401.3483 | Hai Leong Chieu | Hai Leong Chieu, Wee Sun Sun Lee | Relaxed Survey Propagation for The Weighted Maximum Satisfiability
Problem | null | Journal Of Artificial Intelligence Research, Volume 36, pages
229-266, 2009 | 10.1613/jair.2808 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The survey propagation (SP) algorithm has been shown to work well on large
instances of the random 3-SAT problem near its phase transition. It was shown
that SP estimates marginals over covers that represent clusters of solutions.
The SP-y algorithm generalizes SP to work on the maximum satisfiability
(Max-SAT) problem, but the cover interpretation of SP does not generalize to
SP-y. In this paper, we formulate the relaxed survey propagation (RSP)
algorithm, which extends the SP algorithm to apply to the weighted Max-SAT
problem. We show that RSP has an interpretation of estimating marginals over
covers violating a set of clauses with minimal weight. This naturally
generalizes the cover interpretation of SP. Empirically, we show that RSP
outperforms SP-y and other state-of-the-art Max-SAT solvers on random Max-SAT
instances. RSP also outperforms state-of-the-art weighted Max-SAT solvers on
random weighted Max-SAT instances.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:36:10 GMT"
}
]
| 1,389,830,400,000 | [
[
"Chieu",
"Hai Leong",
""
],
[
"Lee",
"Wee Sun Sun",
""
]
]
|
1401.3486 | Anders Jonsson | Anders Jonsson | The Role of Macros in Tractable Planning | null | Journal Of Artificial Intelligence Research, Volume 36, pages
471-511, 2009 | 10.1613/jair.2891 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents several new tractability results for planning based on
macros. We describe an algorithm that optimally solves planning problems in a
class that we call inverted tree reducible, and is provably tractable for
several subclasses of this class. By using macros to store partial plans that
recur frequently in the solution, the algorithm is polynomial in time and space
even for exponentially long plans. We generalize the inverted tree reducible
class in several ways and describe modifications of the algorithm to deal with
these new classes. Theoretical results are validated in experiments.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:37:49 GMT"
}
]
| 1,389,830,400,000 | [
[
"Jonsson",
"Anders",
""
]
]
|
1401.3489 | Robert Mateescu | Robert Mateescu, Kalev Kask, Vibhav Gogate, Rina Dechter | Join-Graph Propagation Algorithms | null | Journal Of Artificial Intelligence Research, Volume 37, pages
279-328, 2010 | 10.1613/jair.2842 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper investigates parameterized approximate message-passing schemes that
are based on bounded inference and are inspired by Pearl's belief propagation
algorithm (BP). We start with the bounded inference mini-clustering algorithm
and then move to the iterative scheme called Iterative Join-Graph Propagation
(IJGP), that combines both iteration and bounded inference. Algorithm IJGP
belongs to the class of Generalized Belief Propagation algorithms, a framework
that allowed connections with approximate algorithms from statistical physics
and is shown empirically to surpass the performance of mini-clustering and
belief propagation, as well as a number of other state-of-the-art algorithms on
several classes of networks. We also provide insight into the accuracy of
iterative BP and IJGP by relating these algorithms to well known classes of
constraint propagation schemes.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:38:39 GMT"
}
]
| 1,389,830,400,000 | [
[
"Mateescu",
"Robert",
""
],
[
"Kask",
"Kalev",
""
],
[
"Gogate",
"Vibhav",
""
],
[
"Dechter",
"Rina",
""
]
]
|
1401.3490 | William Yeoh | William Yeoh, Ariel Felner, Sven Koenig | BnB-ADOPT: An Asynchronous Branch-and-Bound DCOP Algorithm | null | Journal Of Artificial Intelligence Research, Volume 38, pages
85-133, 2010 | 10.1613/jair.2849 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed constraint optimization (DCOP) problems are a popular way of
formulating and solving agent-coordination problems. A DCOP problem is a
problem where several agents coordinate their values such that the sum of the
resulting constraint costs is minimal. It is often desirable to solve DCOP
problems with memory-bounded and asynchronous algorithms. We introduce
Branch-and-Bound ADOPT (BnB-ADOPT), a memory-bounded asynchronous DCOP search
algorithm that uses the message-passing and communication framework of ADOPT
(Modi, Shen, Tambe, and Yokoo, 2005), a well known memory-bounded asynchronous
DCOP search algorithm, but changes the search strategy of ADOPT from best-first
search to depth-first branch-and-bound search. Our experimental results show
that BnB-ADOPT finds cost-minimal solutions up to one order of magnitude faster
than ADOPT for a variety of large DCOP problems and is as fast as NCBB, a
memory-bounded synchronous DCOP search algorithm, for most of these DCOP
problems. Additionally, it is often desirable to find bounded-error solutions
for DCOP problems within a reasonable amount of time since finding cost-minimal
solutions is NP-hard. The existing bounded-error approximation mechanism allows
users only to specify an absolute error bound on the solution cost but a
relative error bound is often more intuitive. Thus, we present two new
bounded-error approximation mechanisms that allow for relative error bounds and
implement them on top of BnB-ADOPT.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:39:26 GMT"
}
]
| 1,389,830,400,000 | [
[
"Yeoh",
"William",
""
],
[
"Felner",
"Ariel",
""
],
[
"Koenig",
"Sven",
""
]
]
|
1401.3491 | Emil Keyder | Emil Keyder, Hector Geffner | Soft Goals Can Be Compiled Away | null | Journal Of Artificial Intelligence Research, Volume 36, pages
547-556, 2009 | 10.1613/jair.2857 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Soft goals extend the classical model of planning with a simple model of
preferences. The best plans are then not the ones with least cost but the ones
with maximum utility, where the utility of a plan is the sum of the utilities
of the soft goals achieved minus the plan cost. Finding plans with high utility
appears to involve two linked problems: choosing a subset of soft goals to
achieve and finding a low-cost plan to achieve them. New search algorithms and
heuristics have been developed for planning with soft goals, and a new track
has been introduced in the International Planning Competition (IPC) to test
their performance. In this note, we show however that these extensions are not
needed: soft goals do not increase the expressive power of the basic model of
planning with action costs, as they can easily be compiled away. We apply this
compilation to the problems of the net-benefit track of the most recent IPC,
and show that optimal and satisficing cost-based planners do better on the
compiled problems than optimal and satisficing net-benefit planners on the
original problems with explicit soft goals. Furthermore, we show that
penalties, or negative preferences expressing conditions to avoid, can also be
compiled away using a similar idea.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:39:49 GMT"
}
]
| 1,389,830,400,000 | [
[
"Keyder",
"Emil",
""
],
[
"Geffner",
"Hector",
""
]
]
|
1401.3492 | Frank Hutter | Frank Hutter, Thomas Stuetzle, Kevin Leyton-Brown, Holger H. Hoos | ParamILS: An Automatic Algorithm Configuration Framework | null | Journal Of Artificial Intelligence Research, Volume 36, pages
267-306, 2009 | 10.1613/jair.2861 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of performance-optimizing parameter settings is an
important part of the development and application of algorithms. We describe an
automatic framework for this algorithm configuration problem. More formally, we
provide methods for optimizing a target algorithm's performance on a given
class of problem instances by varying a set of ordinal and/or categorical
parameters. We review a family of local-search-based algorithm configuration
procedures and present novel techniques for accelerating them by adaptively
limiting the time spent for evaluating individual configurations. We describe
the results of a comprehensive experimental evaluation of our methods, based on
the configuration of prominent complete and incomplete algorithms for SAT. We
also present what is, to our knowledge, the first published work on
automatically configuring the CPLEX mixed integer programming solver. All the
algorithms we considered had default parameter settings that were manually
identified with considerable effort. Nevertheless, using our automated
algorithm configuration procedures, we achieved substantial and consistent
performance improvements.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:40:11 GMT"
}
]
| 1,389,830,400,000 | [
[
"Hutter",
"Frank",
""
],
[
"Stuetzle",
"Thomas",
""
],
[
"Leyton-Brown",
"Kevin",
""
],
[
"Hoos",
"Holger H.",
""
]
]
|
1401.3493 | Uzi Zahavi | Uzi Zahavi, Ariel Felner, Neil Burch, Robert C. Holte | Predicting the Performance of IDA* using Conditional Distributions | null | Journal Of Artificial Intelligence Research, Volume 37, pages
41-83, 2010 | 10.1613/jair.2890 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Korf, Reid, and Edelkamp introduced a formula to predict the number of nodes
IDA* will expand on a single iteration for a given consistent heuristic, and
experimentally demonstrated that it could make very accurate predictions. In
this paper we show that, in addition to requiring the heuristic to be
consistent, their formulas predictions are accurate only at levels of the
brute-force search tree where the heuristic values obey the unconditional
distribution that they defined and then used in their formula. We then propose
a new formula that works well without these requirements, i.e., it can make
accurate predictions of IDA*s performance for inconsistent heuristics and if
the heuristic values in any level do not obey the unconditional distribution.
In order to achieve this we introduce the conditional distribution of heuristic
values which is a generalization of their unconditional heuristic distribution.
We also provide extensions of our formula that handle individual start states
and the augmentation of IDA* with bidirectional pathmax (BPMX), a technique for
propagating heuristic values when inconsistent heuristics are used.
Experimental results demonstrate the accuracy of our new method and all its
variations.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2014 05:41:44 GMT"
}
]
| 1,389,830,400,000 | [
[
"Zahavi",
"Uzi",
""
],
[
"Felner",
"Ariel",
""
],
[
"Burch",
"Neil",
""
],
[
"Holte",
"Robert C.",
""
]
]
|
1401.3827 | Ruijie He | Ruijie He, Emma Brunskill, Nicholas Roy | Efficient Planning under Uncertainty with Macro-actions | null | Journal Of Artificial Intelligence Research, Volume 40, pages
523-570, 2011 | 10.1613/jair.3171 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deciding how to act in partially observable environments remains an active
area of research. Identifying good sequences of decisions is particularly
challenging when good control performance requires planning multiple steps into
the future in domains with many states. Towards addressing this challenge, we
present an online, forward-search algorithm called the Posterior Belief
Distribution (PBD). PBD leverages a novel method for calculating the posterior
distribution over beliefs that result after a sequence of actions is taken,
given the set of observation sequences that could be received during this
process. This method allows us to efficiently evaluate the expected reward of a
sequence of primitive actions, which we refer to as macro-actions. We present a
formal analysis of our approach, and examine its performance on two very large
simulation experiments: scientific exploration and a target monitoring domain.
We also demonstrate our algorithm being used to control a real robotic
helicopter in a target monitoring experiment, which suggests that our approach
has practical potential for planning in real-world, large partially observable
domains where a multi-step lookahead is required to achieve good performance.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:36:09 GMT"
}
]
| 1,389,916,800,000 | [
[
"He",
"Ruijie",
""
],
[
"Brunskill",
"Emma",
""
],
[
"Roy",
"Nicholas",
""
]
]
|
1401.3830 | Henrik Reif Andersen | Henrik Reif Andersen, Tarik Hadzic, David Pisinger | Interactive Cost Configuration Over Decision Diagrams | null | Journal Of Artificial Intelligence Research, Volume 37, pages
99-139, 2010 | 10.1613/jair.2905 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many AI domains such as product configuration, a user should interactively
specify a solution that must satisfy a set of constraints. In such scenarios,
offline compilation of feasible solutions into a tractable representation is an
important approach to delivering efficient backtrack-free user interaction
online. In particular,binary decision diagrams (BDDs) have been successfully
used as a compilation target for product and service configuration. In this
paper we discuss how to extend BDD-based configuration to scenarios involving
cost functions which express user preferences.
We first show that an efficient, robust and easy to implement extension is
possible if the cost function is additive, and feasible solutions are
represented using multi-valued decision diagrams (MDDs). We also discuss the
effect on MDD size if the cost function is non-additive or if it is encoded
explicitly into MDD. We then discuss interactive configuration in the presence
of multiple cost functions. We prove that even in its simplest form,
multiple-cost configuration is NP-hard in the input MDD. However, for solving
two-cost configuration we develop a pseudo-polynomial scheme and a fully
polynomial approximation scheme. The applicability of our approach is
demonstrated through experiments over real-world configuration models and
product-catalogue datasets. Response times are generally within a fraction of a
second even for very large instances.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:48:15 GMT"
}
]
| 1,389,916,800,000 | [
[
"Andersen",
"Henrik Reif",
""
],
[
"Hadzic",
"Tarik",
""
],
[
"Pisinger",
"David",
""
]
]
|
1401.3831 | Raghav Aras | Raghav Aras, Alain Dutech | An Investigation into Mathematical Programming for Finite Horizon
Decentralized POMDPs | null | Journal Of Artificial Intelligence Research, Volume 37, pages
329-396, 2010 | 10.1613/jair.2915 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized planning in uncertain environments is a complex task generally
dealt with by using a decision-theoretic approach, mainly through the framework
of Decentralized Partially Observable Markov Decision Processes (DEC-POMDPs).
Although DEC-POMDPS are a general and powerful modeling tool, solving them is a
task with an overwhelming complexity that can be doubly exponential. In this
paper, we study an alternate formulation of DEC-POMDPs relying on a
sequence-form representation of policies. From this formulation, we show how to
derive Mixed Integer Linear Programming (MILP) problems that, once solved, give
exact optimal solutions to the DEC-POMDPs. We show that these MILPs can be
derived either by using some combinatorial characteristics of the optimal
solutions of the DEC-POMDPs or by using concepts borrowed from game theory.
Through an experimental validation on classical test problems from the
DEC-POMDP literature, we compare our approach to existing algorithms. Results
show that mathematical programming outperforms dynamic programming but is less
efficient than forward search, except for some particular problems. The main
contributions of this work are the use of mathematical programming for
DEC-POMDPs and a better understanding of DEC-POMDPs and of their solutions.
Besides, we argue that our alternate representation of DEC-POMDPs could be
helpful for designing novel algorithms looking for approximate solutions to
DEC-POMDPs.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:49:14 GMT"
}
]
| 1,389,916,800,000 | [
[
"Aras",
"Raghav",
""
],
[
"Dutech",
"Alain",
""
]
]
|
1401.3833 | Bozhena Bidyuk | Bozhena Bidyuk, Rina Dechter, Emma Rollon | Active Tuples-based Scheme for Bounding Posterior Beliefs | null | Journal Of Artificial Intelligence Research, Volume 39, pages
335-371, 2010 | 10.1613/jair.2945 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents a scheme for computing lower and upper bounds on the
posterior marginals in Bayesian networks with discrete variables. Its power
lies in its ability to use any available scheme that bounds the probability of
evidence or posterior marginals and enhance its performance in an anytime
manner. The scheme uses the cutset conditioning principle to tighten existing
bounding schemes and to facilitate anytime behavior, utilizing a fixed number
of cutset tuples. The accuracy of the bounds improves as the number of used
cutset tuples increases and so does the computation time. We demonstrate
empirically the value of our scheme for bounding posterior marginals and
probability of evidence using a variant of the bound propagation algorithm as a
plug-in scheme.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:50:19 GMT"
}
]
| 1,389,916,800,000 | [
[
"Bidyuk",
"Bozhena",
""
],
[
"Dechter",
"Rina",
""
],
[
"Rollon",
"Emma",
""
]
]
|
1401.3835 | Ivan Jos\'e Varzinczak | Ivan Jos\'e Varzinczak | On Action Theory Change | null | Journal Of Artificial Intelligence Research, Volume 37, pages
189-246, 2010 | 10.1613/jair.2959 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As historically acknowledged in the Reasoning about Actions and Change
community, intuitiveness of a logical domain description cannot be fully
automated. Moreover, like any other logical theory, action theories may also
evolve, and thus knowledge engineers need revision methods to help in
accommodating new incoming information about the behavior of actions in an
adequate manner. The present work is about changing action domain descriptions
in multimodal logic. Its contribution is threefold: first we revisit the
semantics of action theory contraction proposed in previous work, giving more
robust operators that express minimal change based on a notion of distance
between Kripke-models. Second we give algorithms for syntactical action theory
contraction and establish their correctness with respect to our semantics for
those action theories that satisfy a principle of modularity investigated in
previous work. Since modularity can be ensured for every action theory and, as
we show here, needs to be computed at most once during the evolution of a
domain description, it does not represent a limitation at all to the method
here studied. Finally we state AGM-like postulates for action theory
contraction and assess the behavior of our operators with respect to them.
Moreover, we also address the revision counterpart of action theory change,
showing that it benefits from our semantics for contraction.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:51:08 GMT"
}
]
| 1,389,916,800,000 | [
[
"Varzinczak",
"Ivan José",
""
]
]
|
1401.3838 | Claudette Cayrol | Claudette Cayrol, Florence Dupin de Saint-Cyr, Marie-Christine
Lagasquie-Schiex | Change in Abstract Argumentation Frameworks: Adding an Argument | null | Journal Of Artificial Intelligence Research, Volume 38, pages
49-84, 2010 | 10.1613/jair.2965 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the problem of change in an abstract argumentation
system. We focus on a particular change: the addition of a new argument which
interacts with previous arguments. We study the impact of such an addition on
the outcome of the argumentation system, more particularly on the set of its
extensions. Several properties for this change operation are defined by
comparing the new set of extensions to the initial one, these properties are
called structural when the comparisons are based on set-cardinality or
set-inclusion relations. Several other properties are proposed where
comparisons are based on the status of some particular arguments: the accepted
arguments; these properties refer to the evolution of this status during the
change, e.g., Monotony and Priority to Recency. All these properties may be
more or less desirable according to specific applications. They are studied
under two particular semantics: the grounded and preferred semantics.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:52:08 GMT"
}
]
| 1,389,916,800,000 | [
[
"Cayrol",
"Claudette",
""
],
[
"de Saint-Cyr",
"Florence Dupin",
""
],
[
"Lagasquie-Schiex",
"Marie-Christine",
""
]
]
|
1401.3839 | Silvia Richter | Silvia Richter, Matthias Westphal | The LAMA Planner: Guiding Cost-Based Anytime Planning with Landmarks | null | Journal Of Artificial Intelligence Research, Volume 39, pages
127-177, 2010 | 10.1613/jair.2972 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LAMA is a classical planning system based on heuristic forward search. Its
core feature is the use of a pseudo-heuristic derived from landmarks,
propositional formulas that must be true in every solution of a planning task.
LAMA builds on the Fast Downward planning system, using finite-domain rather
than binary state variables and multi-heuristic search. The latter is employed
to combine the landmark heuristic with a variant of the well-known FF
heuristic. Both heuristics are cost-sensitive, focusing on high-quality
solutions in the case where actions have non-uniform cost. A weighted A* search
is used with iteratively decreasing weights, so that the planner continues to
search for plans of better quality until the search is terminated. LAMA showed
best performance among all planners in the sequential satisficing track of the
International Planning Competition 2008. In this paper we present the system in
detail and investigate which features of LAMA are crucial for its performance.
We present individual results for some of the domains used at the competition,
demonstrating good and bad cases for the techniques implemented in LAMA.
Overall, we find that using landmarks improves performance, whereas the
incorporation of action costs into the heuristic estimators proves not to be
beneficial. We show that in some domains a search that ignores cost solves far
more problems, raising the question of how to deal with action costs more
effectively in the future. The iterated weighted A* search greatly improves
results, and shows synergy effects with the use of landmarks.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:52:55 GMT"
}
]
| 1,389,916,800,000 | [
[
"Richter",
"Silvia",
""
],
[
"Westphal",
"Matthias",
""
]
]
|
1401.3841 | Mark Owen Riedl | Mark Owen Riedl, Robert Michael Young | Narrative Planning: Balancing Plot and Character | null | Journal Of Artificial Intelligence Research, Volume 39, pages
217-268, 2010 | 10.1613/jair.2989 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Narrative, and in particular storytelling, is an important part of the human
experience. Consequently, computational systems that can reason about narrative
can be more effective communicators, entertainers, educators, and trainers. One
of the central challenges in computational narrative reasoning is narrative
generation, the automated creation of meaningful event sequences. There are
many factors -- logical and aesthetic -- that contribute to the success of a
narrative artifact. Central to this success is its understandability. We argue
that the following two attributes of narratives are universal: (a) the logical
causal progression of plot, and (b) character believability. Character
believability is the perception by the audience that the actions performed by
characters do not negatively impact the audiences suspension of disbelief.
Specifically, characters must be perceived by the audience to be intentional
agents. In this article, we explore the use of refinement search as a technique
for solving the narrative generation problem -- to find a sound and believable
sequence of character actions that transforms an initial world state into a
world state in which goal propositions hold. We describe a novel refinement
search planning algorithm -- the Intent-based Partial Order Causal Link (IPOCL)
planner -- that, in addition to creating causally sound plot progression,
reasons about character intentionality by identifying possible character goals
that explain their actions and creating plan structures that explain why those
characters commit to their goals. We present the results of an empirical
evaluation that demonstrates that narrative plans generated by the IPOCL
algorithm support audience comprehension of character intentions better than
plans generated by conventional partial-order planners.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:54:07 GMT"
}
]
| 1,389,916,800,000 | [
[
"Riedl",
"Mark Owen",
""
],
[
"Young",
"Robert Michael",
""
]
]
|
1401.3842 | David Lesaint | David Lesaint, Deepak Mehta, Barry O'Sullivan, Luis Quesada, Nic
Wilson | Developing Approaches for Solving a Telecommunications Feature
Subscription Problem | null | Journal Of Artificial Intelligence Research, Volume 38, pages
271-305, 2010 | 10.1613/jair.2992 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Call control features (e.g., call-divert, voice-mail) are primitive options
to which users can subscribe off-line to personalise their service. The
configuration of a feature subscription involves choosing and sequencing
features from a catalogue and is subject to constraints that prevent
undesirable feature interactions at run-time. When the subscription requested
by a user is inconsistent, one problem is to find an optimal relaxation, which
is a generalisation of the feedback vertex set problem on directed graphs, and
thus it is an NP-hard task. We present several constraint programming
formulations of the problem. We also present formulations using partial
weighted maximum Boolean satisfiability and mixed integer linear programming.
We study all these formulations by experimentally comparing them on a variety
of randomly generated instances of the feature subscription problem.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:54:27 GMT"
}
]
| 1,389,916,800,000 | [
[
"Lesaint",
"David",
""
],
[
"Mehta",
"Deepak",
""
],
[
"O'Sullivan",
"Barry",
""
],
[
"Quesada",
"Luis",
""
],
[
"Wilson",
"Nic",
""
]
]
|
1401.3846 | Graeme Gange | Graeme Gange, Peter James Stuckey, Vitaly Lagoon | Fast Set Bounds Propagation Using a BDD-SAT Hybrid | null | Journal Of Artificial Intelligence Research, Volume 38, pages
307-338, 2010 | 10.1613/jair.3014 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binary Decision Diagram (BDD) based set bounds propagation is a powerful
approach to solving set-constraint satisfaction problems. However, prior BDD
based techniques in- cur the significant overhead of constructing and
manipulating graphs during search. We present a set-constraint solver which
combines BDD-based set-bounds propagators with the learning abilities of a
modern SAT solver. Together with a number of improvements beyond the basic
algorithm, this solver is highly competitive with existing propagation based
set constraint solvers.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:56:56 GMT"
}
]
| 1,389,916,800,000 | [
[
"Gange",
"Graeme",
""
],
[
"Stuckey",
"Peter James",
""
],
[
"Lagoon",
"Vitaly",
""
]
]
|
1401.3847 | Jia-Hong Wu | Jia-Hong Wu, Robert Givan | Automatic Induction of Bellman-Error Features for Probabilistic Planning | null | Journal Of Artificial Intelligence Research, Volume 38, pages
687-755, 2010 | 10.1613/jair.3021 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain-specific features are important in representing problem structure
throughout machine learning and decision-theoretic planning. In planning, once
state features are provided, domain-independent algorithms such as approximate
value iteration can learn weighted combinations of those features that often
perform well as heuristic estimates of state value (e.g., distance to the
goal). Successful applications in real-world domains often require features
crafted by human experts. Here, we propose automatic processes for learning
useful domain-specific feature sets with little or no human intervention. Our
methods select and add features that describe state-space regions of high
inconsistency in the Bellman equation (statewise Bellman error) during
approximate value iteration. Our method can be applied using any
real-valued-feature hypothesis space and corresponding learning method for
selecting features from training sets of state-value pairs. We evaluate the
method with hypothesis spaces defined by both relational and propositional
feature languages, using nine probabilistic planning domains. We show that
approximate value iteration using a relational feature space performs at the
state-of-the-art in domain-independent stochastic relational planning. Our
method provides the first domain-independent approach that plays Tetris
successfully (without human-engineered features).
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:57:22 GMT"
}
]
| 1,389,916,800,000 | [
[
"Wu",
"Jia-Hong",
""
],
[
"Givan",
"Robert",
""
]
]
|
1401.3848 | Alexander Feldman | Alexander Feldman, Gregory Provan, Arjan van Gemund | Approximate Model-Based Diagnosis Using Greedy Stochastic Search | null | Journal Of Artificial Intelligence Research, Volume 38, pages
371-413, 2010 | 10.1613/jair.3025 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a StochAstic Fault diagnosis AlgoRIthm, called SAFARI, which
trades off guarantees of computing minimal diagnoses for computational
efficiency. We empirically demonstrate, using the 74XXX and ISCAS-85 suites of
benchmark combinatorial circuits, that SAFARI achieves several
orders-of-magnitude speedup over two well-known deterministic algorithms, CDA*
and HA*, for multiple-fault diagnoses; further, SAFARI can compute a range of
multiple-fault diagnoses that CDA* and HA* cannot. We also prove that SAFARI is
optimal for a range of propositional fault models, such as the widely-used
weak-fault models (models with ignorance of abnormal behavior). We discuss the
optimality of SAFARI in a class of strong-fault circuit models with stuck-at
failure modes. By modeling the algorithm itself as a Markov chain, we provide
exact bounds on the minimality of the diagnosis computed. SAFARI also displays
strong anytime behavior, and will return a diagnosis after any non-trivial
inference time.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:57:50 GMT"
}
]
| 1,389,916,800,000 | [
[
"Feldman",
"Alexander",
""
],
[
"Provan",
"Gregory",
""
],
[
"van Gemund",
"Arjan",
""
]
]
|
1401.3850 | Alexander Feldman | Alexander Feldman, Gregory Provan, Arjan van Gemund | A Model-Based Active Testing Approach to Sequential Diagnosis | null | Journal Of Artificial Intelligence Research, Volume 39, pages
301-334, 2010 | 10.1613/jair.3031 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model-based diagnostic reasoning often leads to a large number of diagnostic
hypotheses. The set of diagnoses can be reduced by taking into account extra
observations (passive monitoring), measuring additional variables (probing) or
executing additional tests (sequential diagnosis/test sequencing). In this
paper we combine the above approaches with techniques from Automated Test
Pattern Generation (ATPG) and Model-Based Diagnosis (MBD) into a framework
called FRACTAL (FRamework for ACtive Testing ALgorithms). Apart from the inputs
and outputs that connect a system to its environment, in active testing we
consider additional input variables to which a sequence of test vectors can be
supplied. We address the computationally hard problem of computing optimal
control assignments (as defined in FRACTAL) in terms of a greedy approximation
algorithm called FRACTAL-G. We compare the decrease in the number of remaining
minimal cardinality diagnoses of FRACTAL-G to that of two more FRACTAL
algorithms: FRACTAL-ATPG and FRACTAL-P. FRACTAL-ATPG is based on ATPG and
sequential diagnosis while FRACTAL-P is based on probing and, although not an
active testing algorithm, provides a baseline for comparing the lower bound on
the number of reachable diagnoses for the FRACTAL algorithms. We empirically
evaluate the trade-offs of the three FRACTAL algorithms by performing extensive
experimentation on the ISCAS85/74XXX benchmark of combinational circuits.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:58:46 GMT"
}
]
| 1,389,916,800,000 | [
[
"Feldman",
"Alexander",
""
],
[
"Provan",
"Gregory",
""
],
[
"van Gemund",
"Arjan",
""
]
]
|
1401.3853 | Michael Katz | Michael Katz, Carmel Domshlak | Implicit Abstraction Heuristics | null | Journal Of Artificial Intelligence Research, Volume 39, pages
51-126, 2010 | 10.1613/jair.3063 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-space search with explicit abstraction heuristics is at the state of
the art of cost-optimal planning. These heuristics are inherently limited,
nonetheless, because the size of the abstract space must be bounded by some,
even if a very large, constant. Targeting this shortcoming, we introduce the
notion of (additive) implicit abstractions, in which the planning task is
abstracted by instances of tractable fragments of optimal planning. We then
introduce a concrete setting of this framework, called fork-decomposition, that
is based on two novel fragments of tractable cost-optimal planning. The induced
admissible heuristics are then studied formally and empirically. This study
testifies for the accuracy of the fork decomposition heuristics, yet our
empirical evaluation also stresses the tradeoff between their accuracy and the
runtime complexity of computing them. Indeed, some of the power of the explicit
abstraction heuristics comes from precomputing the heuristic function offline
and then determining h(s) for each evaluated state s by a very fast lookup in a
database. By contrast, while fork-decomposition heuristics can be calculated in
polynomial time, computing them is far from being fast. To address this
problem, we show that the time-per-node complexity bottleneck of the
fork-decomposition heuristics can be successfully overcome. We demonstrate that
an equivalent of the explicit abstraction notion of a database exists for the
fork-decomposition abstractions as well, despite their exponential-size
abstract spaces. We then verify empirically that heuristic search with the
databased" fork-decomposition heuristics favorably competes with the state of
the art of cost-optimal planning.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 04:59:55 GMT"
}
]
| 1,389,916,800,000 | [
[
"Katz",
"Michael",
""
],
[
"Domshlak",
"Carmel",
""
]
]
|
1401.3854 | Bonny Banerjee | Bonny Banerjee, B. Chandrasekaran | A Constraint Satisfaction Framework for Executing Perceptions and
Actions in Diagrammatic Reasoning | null | Journal Of Artificial Intelligence Research, Volume 39, pages
373-427, 2010 | 10.1613/jair.3069 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diagrammatic reasoning (DR) is pervasive in human problem solving as a
powerful adjunct to symbolic reasoning based on language-like representations.
The research reported in this paper is a contribution to building a general
purpose DR system as an extension to a SOAR-like problem solving architecture.
The work is in a framework in which DR is modeled as a process where subtasks
are solved, as appropriate, either by inference from symbolic representations
or by interaction with a diagram, i.e., perceiving specified information from a
diagram or modifying/creating objects in a diagram in specified ways according
to problem solving needs. The perceptions and actions in most DR systems built
so far are hand-coded for the specific application, even when the rest of the
system is built using the general architecture. The absence of a general
framework for executing perceptions/actions poses as a major hindrance to using
them opportunistically -- the essence of open-ended search in problem solving.
Our goal is to develop a framework for executing a wide variety of specified
perceptions and actions across tasks/domains without human intervention. We
observe that the domain/task-specific visual perceptions/actions can be
transformed into domain/task-independent spatial problems. We specify a spatial
problem as a quantified constraint satisfaction problem in the real domain
using an open-ended vocabulary of properties, relations and actions involving
three kinds of diagrammatic objects -- points, curves, regions. Solving a
spatial problem from this specification requires computing the equivalent
simplified quantifier-free expression, the complexity of which is inherently
doubly exponential. We represent objects as configuration of simple elements to
facilitate decomposition of complex problems into simpler and similar
subproblems. We show that, if the symbolic solution to a subproblem can be
expressed concisely, quantifiers can be eliminated from spatial problems in
low-order polynomial time using similar previously solved subproblems. This
requires determining the similarity of two problems, the existence of a mapping
between them computable in polynomial time, and designing a memory for storing
previously solved problems so as to facilitate search. The efficacy of the idea
is shown by time complexity analysis. We demonstrate the proposed approach by
executing perceptions and actions involved in DR tasks in two army
applications.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:00:46 GMT"
}
]
| 1,389,916,800,000 | [
[
"Banerjee",
"Bonny",
""
],
[
"Chandrasekaran",
"B.",
""
]
]
|
1401.3857 | Vadim Bulitko | Vadim Bulitko, Yngvi Bj\"ornsson, Ramon Lawrence | Case-Based Subgoaling in Real-Time Heuristic Search for Video Game
Pathfinding | null | Journal Of Artificial Intelligence Research, Volume 39, pages
269-300, 2010 | 10.1613/jair.3076 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time heuristic search algorithms satisfy a constant bound on the amount
of planning per action, independent of problem size. As a result, they scale up
well as problems become larger. This property would make them well suited for
video games where Artificial Intelligence controlled agents must react quickly
to user commands and to other agents actions. On the downside, real-time search
algorithms employ learning methods that frequently lead to poor solution
quality and cause the agent to appear irrational by re-visiting the same
problem states repeatedly. The situation changed recently with a new algorithm,
D LRTA*, which attempted to eliminate learning by automatically selecting
subgoals. D LRTA* is well poised for video games, except it has a complex and
memory-demanding pre-computation phase during which it builds a database of
subgoals. In this paper, we propose a simpler and more memory-efficient way of
pre-computing subgoals thereby eliminating the main obstacle to applying
state-of-the-art real-time search methods in video games. The new algorithm
solves a number of randomly chosen problems off-line, compresses the solutions
into a series of subgoals and stores them in a database. When presented with a
novel problem on-line, it queries the database for the most similar previously
solved case and uses its subgoals to solve the problem. In the domain of
pathfinding on four large video game maps, the new algorithm delivers solutions
eight times better while using 57 times less memory and requiring 14% less
pre-computation time.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:02:02 GMT"
}
]
| 1,389,916,800,000 | [
[
"Bulitko",
"Vadim",
""
],
[
"Björnsson",
"Yngvi",
""
],
[
"Lawrence",
"Ramon",
""
]
]
|
1401.3860 | Tobias Lang | Tobias Lang, Marc Toussaint | Planning with Noisy Probabilistic Relational Rules | null | Journal Of Artificial Intelligence Research, Volume 39, pages
1-49, 2010 | 10.1613/jair.3093 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Noisy probabilistic relational rules are a promising world model
representation for several reasons. They are compact and generalize over world
instantiations. They are usually interpretable and they can be learned
effectively from the action experiences in complex worlds. We investigate
reasoning with such rules in grounded relational domains. Our algorithms
exploit the compactness of rules for efficient and flexible decision-theoretic
planning. As a first approach, we combine these rules with the Upper Confidence
Bounds applied to Trees (UCT) algorithm based on look-ahead trees. Our second
approach converts these rules into a structured dynamic Bayesian network
representation and predicts the effects of action sequences using approximate
inference and beliefs over world states. We evaluate the effectiveness of our
approaches for planning in a simulated complex 3D robot manipulation scenario
with an articulated manipulator and realistic physics and in domains of the
probabilistic planning competition. Empirical results show that our methods can
solve problems where existing methods fail.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:03:40 GMT"
}
]
| 1,389,916,800,000 | [
[
"Lang",
"Tobias",
""
],
[
"Toussaint",
"Marc",
""
]
]
|
1401.3863 | Gerold J\"ager | Gerold J\"ager, Weixiong Zhang | An Effective Algorithm for and Phase Transitions of the Directed
Hamiltonian Cycle Problem | null | Journal Of Artificial Intelligence Research, Volume 39, pages
663-687, 2010 | 10.1613/jair.3109 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Hamiltonian cycle problem (HCP) is an important combinatorial problem
with applications in many areas. It is among the first problems used for
studying intrinsic properties, including phase transitions, of combinatorial
problems. While thorough theoretical and experimental analyses have been made
on the HCP in undirected graphs, a limited amount of work has been done for the
HCP in directed graphs (DHCP). The main contribution of this work is an
effective algorithm for the DHCP. Our algorithm explores and exploits the close
relationship between the DHCP and the Assignment Problem (AP) and utilizes a
technique based on Boolean satisfiability (SAT). By combining effective
algorithms for the AP and SAT, our algorithm significantly outperforms previous
exact DHCP algorithms, including an algorithm based on the award-winning
Concorde TSP algorithm. The second result of the current study is an
experimental analysis of phase transitions of the DHCP, verifying and refining
a known phase transition of the DHCP.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:04:57 GMT"
}
]
| 1,389,916,800,000 | [
[
"Jäger",
"Gerold",
""
],
[
"Zhang",
"Weixiong",
""
]
]
|
1401.3867 | Aaron Hunter | Aaron Hunter, James P. Delgrande | Iterated Belief Change Due to Actions and Observations | null | Journal Of Artificial Intelligence Research, Volume 40, pages
269-304, 2011 | 10.1613/jair.3132 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In action domains where agents may have erroneous beliefs, reasoning about
the effects of actions involves reasoning about belief change. In this paper,
we use a transition system approach to reason about the evolution of an agents
beliefs as actions are executed. Some actions cause an agent to perform belief
revision while others cause an agent to perform belief update, but the
interaction between revision and update can be non-elementary. We present a set
of rationality properties describing the interaction between revision and
update, and we introduce a new class of belief change operators for reasoning
about alternating sequences of revisions and updates. Our belief change
operators can be characterized in terms of a natural shifting operation on
total pre-orderings over interpretations. We compare our approach with related
work on iterated belief change due to action, and we conclude with some
directions for future research.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:06:49 GMT"
}
]
| 1,389,916,800,000 | [
[
"Hunter",
"Aaron",
""
],
[
"Delgrande",
"James P.",
""
]
]
|
1401.3872 | Christophe Lecoutre | Christophe Lecoutre, Stephane Cardon, Julien Vion | Second-Order Consistencies | null | Journal Of Artificial Intelligence Research, Volume 40, pages
175-219, 2011 | 10.1613/jair.3180 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a comprehensive study of second-order consistencies
(i.e., consistencies identifying inconsistent pairs of values) for constraint
satisfaction. We build a full picture of the relationships existing between
four basic second-order consistencies, namely path consistency (PC),
3-consistency (3C), dual consistency (DC) and 2-singleton arc consistency
(2SAC), as well as their conservative and strong variants. Interestingly, dual
consistency is an original property that can be established by using the
outcome of the enforcement of generalized arc consistency (GAC), which makes it
rather easy to obtain since constraint solvers typically maintain GAC during
search. On binary constraint networks, DC is equivalent to PC, but its
restriction to existing constraints, called conservative dual consistency
(CDC), is strictly stronger than traditional conservative consistencies derived
from path consistency, namely partial path consistency (PPC) and conservative
path consistency (CPC). After introducing a general algorithm to enforce strong
(C)DC, we present the results of an experimentation over a wide range of
benchmarks that demonstrate the interest of (conservative) dual consistency. In
particular, we show that enforcing (C)DC before search clearly improves the
performance of MAC (the algorithm that maintains GAC during search) on several
binary and non-binary structured problems.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:09:30 GMT"
}
]
| 1,389,916,800,000 | [
[
"Lecoutre",
"Christophe",
""
],
[
"Cardon",
"Stephane",
""
],
[
"Vion",
"Julien",
""
]
]
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.