id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1106.0251 | N. L. Zhang | N. L. Zhang, W. Zhang | Speeding Up the Convergence of Value Iteration in Partially Observable
Markov Decision Processes | null | Journal Of Artificial Intelligence Research, Volume 14, pages
29-51, 2001 | 10.1613/jair.761 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially observable Markov decision processes (POMDPs) have recently become
popular among many AI researchers because they serve as a natural model for
planning under uncertainty. Value iteration is a well-known algorithm for
finding optimal policies for POMDPs. It typically takes a large number of
iterations to converge. This paper proposes a method for accelerating the
convergence of value iteration. The method has been evaluated on an array of
benchmark problems and was found to be very effective: It enabled value
iteration to converge after only a few iterations on all the test problems.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 16:40:25 GMT"
}
] | 1,306,972,800,000 | [
[
"Zhang",
"N. L.",
""
],
[
"Zhang",
"W.",
""
]
] |
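For orientation on the record above: POMDP value iteration sweeps a piecewise-linear convex value function over belief space, but its fully observable analogue shows the iteration structure whose convergence the paper accelerates. A minimal sketch, assuming dense NumPy arrays `P` (actions x states x states) and `R` (states x actions); this is generic value iteration, not the paper's acceleration method.

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-6):
    """Generic MDP value iteration: repeat Bellman backups until the
    residual falls below tol. POMDP value iteration replaces V over
    states with a set of alpha-vectors over beliefs."""
    V = np.zeros(R.shape[0])
    while True:
        Q = R + gamma * np.einsum('ast,t->sa', P, V)  # Q[s, a] backup
        V_new = Q.max(axis=1)
        if np.abs(V_new - V).max() < tol:             # Bellman residual
            return V_new
        V = V_new
```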
1106.0252 | A. Cimatti | A. Cimatti, M. Roveri | Conformant Planning via Symbolic Model Checking | null | Journal Of Artificial Intelligence Research, Volume 13, pages
305-338, 2000 | 10.1613/jair.774 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We tackle the problem of planning in nondeterministic domains by presenting
a new approach to conformant planning. Conformant planning is the problem of
finding a sequence of actions that is guaranteed to achieve the goal despite
the nondeterminism of the domain. Our approach is based on the representation
of the planning domain as a finite state automaton. We use Symbolic Model
Checking techniques, in particular Binary Decision Diagrams, to compactly
represent and efficiently search the automaton. In this paper we make the
following contributions. First, we present a general planning algorithm for
conformant planning, which applies to fully nondeterministic domains, with
uncertainty in the initial condition and in action effects. The algorithm is
based on a breadth-first, backward search, and returns conformant plans of
minimal length if a solution to the planning problem exists; otherwise it
terminates, concluding that the problem admits no conformant solution. Second,
we provide a symbolic representation of the search space based on Binary
Decision Diagrams (BDDs), which is the basis for search techniques derived from
symbolic model checking. The symbolic representation makes it possible to
analyze potentially large sets of states and transitions in a single
computation step, thus providing for an efficient implementation. Third, we
present CMBP (Conformant Model Based Planner), an efficient implementation of
the data structures and algorithm described above, directly based on BDD
manipulations, which allows for a compact representation of the search layers
and an efficient implementation of the search steps. Finally, we present an
experimental comparison of our approach with the state-of-the-art conformant
planners CGP, QBFPLAN and GPT. Our analysis includes all the planning problems
from the distribution packages of these systems, plus other problems defined to
stress a number of specific factors. Our approach appears to be the most
effective: CMBP is strictly more expressive than QBFPLAN and CGP and, in all
the problems where a comparison is possible, CMBP outperforms its competitors,
sometimes by orders of magnitude.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 16:40:44 GMT"
}
] | 1,306,972,800,000 | [
[
"Cimatti",
"A.",
""
],
[
"Roveri",
"M.",
""
]
] |
1106.0253 | J. Cheng | J. Cheng, M. J. Druzdzel | AIS-BN: An Adaptive Importance Sampling Algorithm for Evidential
Reasoning in Large Bayesian Networks | null | Journal Of Artificial Intelligence Research, Volume 13, pages
155-188, 2000 | 10.1613/jair.764 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic sampling algorithms, while an attractive alternative to exact
algorithms in very large Bayesian network models, have been observed to perform
poorly in evidential reasoning with extremely unlikely evidence. To address
this problem, we propose an adaptive importance sampling algorithm, AIS-BN,
that shows promising convergence rates even under extreme conditions and seems
to outperform the existing sampling algorithms consistently. Three sources of
this performance improvement are (1) two heuristics for initialization of the
importance function that are based on the theoretical properties of importance
sampling in finite-dimensional integrals and the structural advantages of
Bayesian networks, (2) a smooth learning method for the importance function,
and (3) a dynamic weighting function for combining samples from different
stages of the algorithm. We tested the performance of the AIS-BN algorithm
along with two state-of-the-art general-purpose sampling algorithms, likelihood
weighting (Fung and Chang, 1989; Shachter and Peot, 1989) and self-importance
sampling (Shachter and Peot, 1989). In our tests we used three large real
Bayesian network models available to the scientific community: the CPCS network
(Pradhan et al., 1994), the PathFinder network (Heckerman, Horvitz, and
Nathwani, 1990), and the ANDES network (Conati, Gertner, VanLehn, and Druzdzel,
1997), with evidence as unlikely as 10^-41. While the AIS-BN algorithm always
performed better than the other two algorithms, in the majority of the test
cases it achieved orders of magnitude improvement in precision of the results.
Improvement in speed given a desired precision is even more dramatic, although
we are unable to report numerical results here, as the other algorithms almost
never achieved the precision reached even by the first few iterations of the
AIS-BN algorithm.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 16:40:57 GMT"
}
] | 1,306,972,800,000 | [
[
"Cheng",
"J.",
""
],
[
"Druzdzel",
"M. J.",
""
]
] |
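A minimal fixed-proposal importance-sampling skeleton for estimating the probability of evidence; `sample_q` and `weight` are hypothetical callbacks, and AIS-BN's actual contributions (initialization heuristics, learning the importance function, dynamic stage weighting) are deliberately not reproduced here.

```python
import numpy as np

def estimate_evidence(sample_q, weight, n=100_000, seed=0):
    """Draw x ~ Q and average w(x) = P(x, e) / Q(x); the mean is an
    unbiased estimate of P(e). AIS-BN adapts Q between sampling stages,
    which this fixed-Q sketch omits."""
    rng = np.random.default_rng(seed)
    ws = np.array([weight(sample_q(rng)) for _ in range(n)])
    return ws.mean(), ws.std(ddof=1) / np.sqrt(n)  # estimate, standard error
```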
1106.0254 | X. Chen | X. Chen, P. van Beek | Conflict-Directed Backjumping Revisited | null | Journal Of Artificial Intelligence Research, Volume 14, pages
53-81, 2001 | 10.1613/jair.788 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, many improvements to backtracking algorithms for solving
constraint satisfaction problems have been proposed. The techniques for
improving backtracking algorithms can be conveniently classified as look-ahead
schemes and look-back schemes. Unfortunately, look-ahead and look-back schemes
are not entirely orthogonal as it has been observed empirically that the
enhancement of look-ahead techniques is sometimes counterproductive to the
effects of look-back techniques. In this paper, we focus on the relationship
between the two most important look-ahead techniques---using a variable
ordering heuristic and maintaining a level of local consistency during the
backtracking search---and the look-back technique of conflict-directed
backjumping (CBJ). We show that there exists a "perfect" dynamic variable
ordering such that CBJ becomes redundant. We also show theoretically that as
the level of local consistency maintained in the backtracking search
increases, backjumping yields less of an improvement. Our theoretical
results partially explain why a backtracking algorithm doing more in the
look-ahead phase cannot benefit more from the backjumping look-back scheme.
Finally, we show empirically that adding CBJ to a backtracking algorithm that
maintains generalized arc consistency (GAC), an algorithm that we refer to as
GAC-CBJ, can still provide orders of magnitude speedups. Our empirical results
contrast with Bessiere and Regin's conclusion (1996) that CBJ is useless to an
algorithm that maintains arc consistency.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 16:41:13 GMT"
}
] | 1,306,972,800,000 | [
[
"Chen",
"X.",
""
],
[
"van Beek",
"P.",
""
]
] |
1106.0256 | J. M. Siskind | J. M. Siskind | Grounding the Lexical Semantics of Verbs in Visual Perception using
Force Dynamics and Event Logic | null | Journal Of Artificial Intelligence Research, Volume 15, pages
31-90, 2001 | 10.1613/jair.790 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an implemented system for recognizing the occurrence of
events described by simple spatial-motion verbs in short image sequences. The
semantics of these verbs is specified with event-logic expressions that
describe changes in the state of force-dynamic relations between the
participants of the event. An efficient finite representation is introduced for
the infinite sets of intervals that occur when describing liquid and
semi-liquid events. Additionally, an efficient procedure using this
representation is presented for inferring occurrences of compound events,
described with event-logic expressions, from occurrences of primitive events.
Using force dynamics and event logic to specify the lexical semantics of events
allows the system to be more robust than prior systems based on motion profile.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 16:41:31 GMT"
}
] | 1,306,972,800,000 | [
[
"Siskind",
"J. M.",
""
]
] |
1106.0257 | R. Maclin | R. Maclin, D. Opitz | Popular Ensemble Methods: An Empirical Study | null | Journal Of Artificial Intelligence Research, Volume 11, pages
169-198, 1999 | 10.1613/jair.614 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An ensemble consists of a set of individually trained classifiers (such as
neural networks or decision trees) whose predictions are combined when
classifying novel instances. Previous research has shown that an ensemble is
often more accurate than any of the single classifiers in the ensemble. Bagging
(Breiman, 1996c) and Boosting (Freund and Schapire, 1996; Schapire, 1990) are two
relatively new but popular methods for producing ensembles. In this paper we
evaluate these methods on 23 data sets using both neural networks and decision
trees as our classification algorithms. Our results clearly indicate a number of
conclusions. First, while Bagging is almost always more accurate than a single
classifier, it is sometimes much less accurate than Boosting. On the other
hand, Boosting can create ensembles that are less accurate than a single
classifier -- especially when using neural networks. Analysis indicates that
the performance of the Boosting methods is dependent on the characteristics of
the data set being examined. In fact, further results show that Boosting
ensembles may overfit noisy data sets, thus decreasing their performance.
Finally, consistent with previous studies, our work suggests that most of the
gain in an ensemble's performance comes in the first few classifiers combined;
however, relatively large gains can be seen up to 25 classifiers when Boosting
decision trees.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 16:41:44 GMT"
}
] | 1,306,972,800,000 | [
[
"Maclin",
"R.",
""
],
[
"Opitz",
"D.",
""
]
] |
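A minimal sketch of Bagging as evaluated above, assuming NumPy arrays and integer class labels (needed for the vote counting); the hyperparameters are illustrative, not the paper's experimental setup.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def bagging_predict(X, y, X_test, n_estimators=25, seed=0):
    """Bagging (Breiman, 1996): train each tree on a bootstrap resample
    of (X, y), then majority-vote the test predictions."""
    rng = np.random.default_rng(seed)
    votes = []
    for _ in range(n_estimators):
        idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
        votes.append(DecisionTreeClassifier().fit(X[idx], y[idx]).predict(X_test))
    votes = np.asarray(votes)                       # (n_estimators, n_test)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
```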
1106.0284 | E. F. Khor | E. F. Khor, T. H. Lee, R. Sathikannan, K. C. Tan | An Evolutionary Algorithm with Advanced Goal and Priority Specification
for Multi-objective Optimization | null | Journal Of Artificial Intelligence Research, Volume 18, pages
183-215, 2003 | 10.1613/jair.842 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an evolutionary algorithm with a new goal-sequence
domination scheme for better decision support in multi-objective optimization.
The approach allows the inclusion of advanced hard/soft priority and constraint
information on each objective component, and is capable of incorporating
multiple specifications with overlapping or non-overlapping objective functions
via logical 'OR' and 'AND' connectives to drive the search towards multiple
regions of trade-off. In addition, we propose a dynamic sharing scheme that is
simple and adaptively estimated according to the on-line population
distribution without needing any a priori parameter setting. Each feature in
the proposed algorithm is examined to show its respective contribution, and the
performance of the algorithm is compared with other evolutionary optimization
methods. It is shown that the proposed algorithm has performed well in the
diversity of evolutionary search and uniform distribution of non-dominated
individuals along the final trade-offs, without significant computational
effort. The algorithm is also applied to the design optimization of a practical
servo control system for hard disk drives with a single voice-coil-motor
actuator. Results of the evolutionary designed servo control system show a
superior closed-loop performance compared to classical PID or RPT approaches.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 19:15:16 GMT"
}
] | 1,306,972,800,000 | [
[
"Khor",
"E. F.",
""
],
[
"Lee",
"T. H.",
""
],
[
"Sathikannan",
"R.",
""
],
[
"Tan",
"K. C.",
""
]
] |
1106.0285 | I. Refanidis | I. Refanidis, I. Vlahavas | The GRT Planning System: Backward Heuristic Construction in Forward
State-Space Planning | null | Journal Of Artificial Intelligence Research, Volume 15, pages
115-161, 2001 | 10.1613/jair.893 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents GRT, a domain-independent heuristic planning system for
STRIPS worlds. GRT solves problems in two phases. In the pre-processing phase,
it estimates the distance between each fact and the goals of the problem, in a
backward direction. Then, in the search phase, these estimates are used in
order to further estimate the distance between each intermediate state and the
goals, thus guiding the search process in a forward direction on a best-first
basis. The paper presents the benefits of adopting opposite directions
between the preprocessing and the search phases, discusses some difficulties
that arise in the pre-processing phase and introduces techniques to cope with
them. Moreover, it presents several methods of improving the efficiency of the
heuristic, by enriching the representation and by reducing the size of the
problem. Finally, a method of overcoming locally optimal states, based on domain
axioms, is proposed. Under this method, difficult problems are decomposed into
easier sub-problems that have to be solved sequentially. The performance
results from various domains, including those of the recent planning
competitions, show that GRT is among the fastest planners.
| [
{
"version": "v1",
"created": "Wed, 1 Jun 2011 19:17:11 GMT"
}
] | 1,306,972,800,000 | [
[
"Refanidis",
"I.",
""
],
[
"Vlahavas",
"I.",
""
]
] |
1106.0664 | M. Cristani | M. Cristani | The Complexity of Reasoning about Spatial Congruence | null | Journal Of Artificial Intelligence Research, Volume 11, pages
361-390, 1999 | 10.1613/jair.641 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the recent literature of Artificial Intelligence, an intensive research
effort has been spent, for various algebras of qualitative relations used in
the representation of temporal and spatial knowledge, on the problem of
classifying the computational complexity of reasoning problems for subsets of
algebras. The main purpose of this research is to describe a restricted set
of maximal tractable subalgebras, ideally in an exhaustive fashion with respect
to the hosting algebras. In this paper we introduce a novel algebra for
reasoning about Spatial Congruence, show that the satisfiability problem in the
spatial algebra MC-4 is NP-complete, and present a complete classification of
tractability in the algebra, based on the identification of three maximal
tractable subclasses, one containing the basic relations. The three algebras
are formed by 14, 10 and 9 relations out of 16 which form the full algebra.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:51:33 GMT"
}
] | 1,307,318,400,000 | [
[
"Cristani",
"M.",
""
]
] |
1106.0665 | Jonathan Baxter | Jonathan Baxter and Peter L. Bartlett | Infinite-Horizon Policy-Gradient Estimation | null | Journal Of Artificial Intelligence Research, Volume 15, pages
319-350, 2001 | 10.1613/jair.806 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gradient-based approaches to direct policy search in reinforcement learning
have received much recent attention as a means to solve problems of partial
observability and to avoid some of the problems associated with policy
degradation in value-function methods. In this paper we introduce GPOMDP, a
simulation-based algorithm for generating a {\em biased} estimate of the
gradient of the {\em average reward} in Partially Observable Markov Decision
Processes (POMDPs) controlled by parameterized stochastic policies. A similar
algorithm was proposed by Kimura, Yamamura, and Kobayashi (1995). The
algorithm's chief advantages are that it requires storage of only twice the
number of policy parameters, uses one free parameter $\beta\in [0,1)$ (which
has a natural interpretation in terms of bias-variance trade-off), and requires
no knowledge of the underlying state. We prove convergence of GPOMDP, and show
how the correct choice of the parameter $\beta$ is related to the {\em mixing
time} of the controlled POMDP. We briefly describe extensions of GPOMDP to
controlled Markov chains, continuous state, observation and control spaces,
multiple-agents, higher-order derivatives, and a version for training
stochastic policies with internal states. In a companion paper (Baxter,
Bartlett, & Weaver, 2001) we show how the gradient estimates generated by
GPOMDP can be used in both a traditional stochastic gradient algorithm and a
conjugate-gradient procedure to find local optima of the average reward.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:52:01 GMT"
},
{
"version": "v2",
"created": "Fri, 15 Nov 2019 16:18:16 GMT"
}
] | 1,574,035,200,000 | [
[
"Baxter",
"Jonathan",
""
],
[
"Bartlett",
"Peter L.",
""
]
] |
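A sketch of the GPOMDP estimator's core loop as the abstract describes it: a single long trajectory, an eligibility trace discounted by beta, and a running average as the biased gradient estimate. The callbacks `reset`, `step`, and `grad_logp` are hypothetical interfaces for illustration, not a fixed API.

```python
import numpy as np

def gpomdp(reset, step, grad_logp, theta, beta=0.9, T=100_000):
    """GPOMDP core loop. beta in [0, 1) trades bias (shrinks as beta -> 1)
    against variance (grows as beta -> 1).
    reset() -> obs; step(a) -> (obs, reward);
    grad_logp(obs, theta) -> (a, grad log pi(a | obs; theta))."""
    z = np.zeros_like(theta)          # eligibility trace
    delta = np.zeros_like(theta)      # running gradient estimate
    obs = reset()
    for t in range(1, T + 1):
        a, g = grad_logp(obs, theta)  # sample an action from the policy
        obs, r = step(a)
        z = beta * z + g
        delta += (r * z - delta) / t  # incremental average of r_t * z_t
    return delta
```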
1106.0667 | U. Straccia | U. Straccia | Reasoning within Fuzzy Description Logics | null | Journal Of Artificial Intelligence Research, Volume 14, pages
137-166, 2001 | 10.1613/jair.813 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Description Logics (DLs) are suitable, well-known, logics for managing
structured knowledge. They allow reasoning about individuals and well-defined
concepts, i.e., sets of individuals with common properties. The experience in
using DLs in applications has shown that in many cases we would like to extend
their capabilities. In particular, their use in the context of Multimedia
Information Retrieval (MIR) leads to the conviction that such DLs should
allow the treatment of the inherent imprecision in multimedia object content
representation and retrieval. In this paper we will present a fuzzy extension
of ALC, combining Zadeh's fuzzy logic with a classical DL. In particular,
concepts become fuzzy and, thus, reasoning about imprecise concepts is
supported. We will define its syntax, its semantics, describe its properties
and present a constraint propagation calculus for reasoning in it.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:52:49 GMT"
}
] | 1,307,318,400,000 | [
[
"Straccia",
"U.",
""
]
] |
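For concreteness, the Zadeh-style connective semantics the abstract refers to, evaluated directly on membership degrees in [0, 1]; this shows only the semantics of the fuzzy ALC connectives, not the paper's constraint-propagation calculus.

```python
def f_and(a, b): return min(a, b)   # fuzzy concept conjunction
def f_or(a, b):  return max(a, b)   # fuzzy concept disjunction
def f_not(a):    return 1.0 - a     # fuzzy concept negation

def f_exists(successors):
    """(exists R.C)(x) = sup over y of min(R(x, y), C(y));
    successors: iterable of (role_degree, concept_degree) pairs."""
    return max((min(r, c) for r, c in successors), default=0.0)

def f_forall(successors):
    """(forall R.C)(x) = inf over y of max(1 - R(x, y), C(y));
    vacuously 1.0 when x has no role successors."""
    return min((max(1.0 - r, c) for r, c in successors), default=1.0)
```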
1106.0668 | T. Elomaa | T. Elomaa, M. Kaariainen | An Analysis of Reduced Error Pruning | null | Journal Of Artificial Intelligence Research, Volume 15, pages
163-187, 2001 | 10.1613/jair.816 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Top-down induction of decision trees has been observed to suffer from the
inadequate functioning of the pruning phase. In particular, it is known that
the size of the resulting tree grows linearly with the sample size, even though
the accuracy of the tree does not improve. Reduced Error Pruning is an
algorithm that has been used as a representative technique in attempts to
explain the problems of decision tree learning. In this paper we present
analyses of Reduced Error Pruning in three different settings. First we study
the basic algorithmic properties of the method, properties that hold
independent of the input decision tree and pruning examples. Then we examine a
situation that intuitively should lead to the subtree under consideration being
replaced by a leaf node, one in which the class label and attribute values of
the pruning examples are independent of each other. This analysis is conducted
under two different assumptions. The general analysis shows that the pruning
probability of a node fitting pure noise is bounded by a function that
decreases exponentially as the size of the tree grows. In a specific analysis
we assume that the examples are distributed uniformly to the tree. This
assumption lets us approximate the number of subtrees that are pruned because
they do not receive any pruning examples. This paper clarifies the different
variants of the Reduced Error Pruning algorithm, brings new insight to its
algorithmic properties, analyses the algorithm with fewer imposed assumptions
than before, and includes the previously overlooked empty subtrees in the
analysis.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:53:10 GMT"
}
] | 1,307,318,400,000 | [
[
"Elomaa",
"T.",
""
],
[
"Kaariainen",
"M.",
""
]
] |
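A bottom-up Reduced Error Pruning sketch under an assumed minimal tree representation (the `Node` layout below is illustrative, not a fixed API). Subtrees that receive no pruning examples are left intact here, which is one of the algorithm variants the abstract distinguishes.

```python
import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[int] = None    # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    majority: int = 0                # majority training class at this node

def subtree_error(n, X, y):
    """Misclassifications of the unpruned subtree on the pruning set."""
    if n.feature is None:
        return int((y != n.majority).sum())
    m = X[:, n.feature] <= n.threshold
    return subtree_error(n.left, X[m], y[m]) + subtree_error(n.right, X[~m], y[~m])

def rep(node, X, y):
    """Bottom-up REP: collapse a subtree to a leaf whenever that does not
    increase error on the pruning set (X, y)."""
    if node.feature is None or len(y) == 0:
        return node
    m = X[:, node.feature] <= node.threshold
    node.left, node.right = rep(node.left, X[m], y[m]), rep(node.right, X[~m], y[~m])
    if int((y != node.majority).sum()) <= subtree_error(node, X, y):
        return Node(majority=node.majority)  # prune: replace subtree by a leaf
    return node
```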
1106.0669 | M. L. Ginsberg | M. L. Ginsberg | GIB: Imperfect Information in a Computationally Challenging Game | null | Journal Of Artificial Intelligence Research, Volume 14, pages
303-358, 2001 | 10.1613/jair.820 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the problems arising in the construction of a program
to play the game of contract bridge. These problems include both the difficulty
of solving the game's perfect information variant, and techniques needed to
address the fact that bridge is not, in fact, a perfect information game. GIB,
the program being described, involves five separate technical advances:
partition search, the practical application of Monte Carlo techniques to
realistic problems, a focus on achievable sets to solve problems inherent in
the Monte Carlo approach, an extension of alpha-beta pruning from total orders
to arbitrary distributive lattices, and the use of squeaky wheel optimization
to find approximately optimal solutions to cardplay problems. GIB is currently
believed to be of approximately expert caliber, and is currently the strongest
computer bridge program in the world.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:53:55 GMT"
}
] | 1,307,318,400,000 | [
[
"Ginsberg",
"M. L.",
""
]
] |
1106.0671 | C. Bessiere | C. Bessiere, R. Debruyne | Domain Filtering Consistencies | null | Journal Of Artificial Intelligence Research, Volume 14, pages
205-230, 2001 | 10.1613/jair.834 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enforcing local consistencies is one of the main features of constraint
reasoning. Which level of local consistency should be used when searching for
solutions in a constraint network is a basic question. Arc consistency and
partial forms of arc consistency have been widely studied, and have been known
for some time through the forward checking or the MAC search algorithms. Until
recently, stronger forms of local consistency remained limited to those that
change the structure of the constraint graph, and thus, could not be used in
practice, especially on large networks. This paper focuses on the local
consistencies that are stronger than arc consistency, without changing the
structure of the network, i.e., only removing inconsistent values from the
domains. In the last five years, several such local consistencies have been
proposed by us or by others. We provide an overview of all of them, and highlight
some relations between them. We compare them both theoretically and
experimentally, considering their pruning efficiency and the time required to
enforce them.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:54:17 GMT"
}
] | 1,307,318,400,000 | [
[
"Bessiere",
"C.",
""
],
[
"Debruyne",
"R.",
""
]
] |
1106.0672 | H. H. Bui | H. H. Bui, S. Venkatesh, G. West | Policy Recognition in the Abstract Hidden Markov Model | null | Journal Of Artificial Intelligence Research, Volume 17, pages
451-499, 2002 | 10.1613/jair.839 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a method for recognising an agent's behaviour in
dynamic, noisy, uncertain domains, and across multiple levels of abstraction.
We term this problem on-line plan recognition under uncertainty and view it
generally as probabilistic inference on the stochastic process representing the
execution of the agent's plan. Our contributions in this paper are twofold. In
terms of probabilistic inference, we introduce the Abstract Hidden Markov Model
(AHMM), a novel type of stochastic process, provide its dynamic Bayesian
network (DBN) structure and analyse the properties of this network. We then
describe an application of the Rao-Blackwellised Particle Filter to the AHMM
which allows us to construct an efficient, hybrid inference method for this
model. In terms of plan recognition, we propose a novel plan recognition
framework based on the AHMM as the plan execution model. The Rao-Blackwellised
hybrid inference for AHMM can take advantage of the independence properties
inherent in a model of plan execution, leading to an algorithm for online
probabilistic plan recognition that scales well with the number of levels in
the plan hierarchy. This illustrates that while stochastic models for plan
execution can be complex, they exhibit special structures which, if exploited,
can lead to efficient plan recognition algorithms. We demonstrate the
usefulness of the AHMM framework via a behaviour recognition system in a
complex spatial environment using distributed video surveillance data.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:54:32 GMT"
}
] | 1,307,318,400,000 | [
[
"Bui",
"H. H.",
""
],
[
"Venkatesh",
"S.",
""
],
[
"West",
"G.",
""
]
] |
1106.0675 | J. Hoffmann | J. Hoffmann, B. Nebel | The FF Planning System: Fast Plan Generation Through Heuristic Search | null | Journal Of Artificial Intelligence Research, Volume 14, pages
253-302, 2001 | 10.1613/jair.855 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe and evaluate the algorithmic techniques that are used in the FF
planning system. Like the HSP system, FF relies on forward state space search,
using a heuristic that estimates goal distances by ignoring delete lists.
Unlike HSP's heuristic, our method does not assume facts to be independent. We
introduce a novel search strategy that combines hill-climbing with systematic
search, and we show how other powerful heuristic information can be extracted
and used to prune the search space. FF was the most successful automatic
planner at the recent AIPS-2000 planning competition. We review the results of
the competition, give data for other benchmark domains, and investigate the
reasons for the runtime performance of FF compared to HSP.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:55:02 GMT"
}
] | 1,307,318,400,000 | [
[
"Hoffmann",
"J.",
""
],
[
"Nebel",
"B.",
""
]
] |
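The delete-relaxation idea the abstract mentions, in a minimal layer-counting form; FF itself extracts an explicit relaxed plan from a planning graph and counts its actions, so the sketch below is a coarser stand-in under an assumed frozenset-based action encoding.

```python
def relaxed_layers(state, goals, actions):
    """Delete-relaxation layer count: grow the set of reachable facts,
    ignoring delete lists, until all goals hold.
    actions: iterable of (preconditions, add_effects) pairs of frozensets."""
    reached, layers = set(state), 0
    while not set(goals) <= reached:
        new = {f for pre, adds in actions if pre <= reached for f in adds}
        if new <= reached:
            return float('inf')   # goals unreachable even in the relaxation
        reached |= new
        layers += 1
    return layers
```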
1106.0678 | M. Kearns | M. Kearns, M. L. Littman, S. Singh, P. Stone | ATTac-2000: An Adaptive Autonomous Bidding Agent | null | Journal Of Artificial Intelligence Research, Volume 15, pages
189-206, 2001 | 10.1613/jair.865 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The First Trading Agent Competition (TAC) was held from June 22nd to July
8th, 2000. TAC was designed to create a benchmark problem in the complex domain
of e-marketplaces and to motivate researchers to apply unique approaches to a
common task. This article describes ATTac-2000, the first-place finisher in
TAC. ATTac-2000 uses a principled bidding strategy that includes several
elements of adaptivity. In addition to the success at the competition, isolated
empirical results are presented indicating the robustness and effectiveness of
ATTac-2000's adaptive strategy.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:55:42 GMT"
}
] | 1,307,318,400,000 | [
[
"Kearns",
"M.",
""
],
[
"Littman",
"M. L.",
""
],
[
"Singh",
"S.",
""
],
[
"Stone",
"P.",
""
]
] |
1106.0679 | B. Nebel | B. Nebel, J. Renz | Efficient Methods for Qualitative Spatial Reasoning | null | Journal Of Artificial Intelligence Research, Volume 15, pages
289-318, 2001 | 10.1613/jair.872 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The theoretical properties of qualitative spatial reasoning in the RCC8
framework have been analyzed extensively. However, no empirical investigation
has been made yet. Our experiments show that the adaptation of the algorithms
used for qualitative temporal reasoning can solve large RCC8 instances, even if
they are in the phase transition region -- provided that one uses the maximal
tractable subsets of RCC8 that have been identified by us. In particular, we
demonstrate that the orthogonal combination of heuristic methods is successful
in solving almost all apparently hard instances in the phase transition region
up to a certain size in reasonable time.
| [
{
"version": "v1",
"created": "Fri, 3 Jun 2011 14:56:05 GMT"
}
] | 1,307,318,400,000 | [
[
"Nebel",
"B.",
""
],
[
"Renz",
"J.",
""
]
] |
1106.1510 | Alexander Shkotin | Alex Shkotin, Vladimir Ryakhovsky, Dmitry Kudryavtsev | Towards OWL-based Knowledge Representation in Petrology | 10 pages. The paper has been accepted by OWLED2011 as a long
presentation | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents our work on the development of OWL-driven systems for formal
representation and reasoning about terminological knowledge and facts in
petrology. The long-term aim of our project is to provide solid foundations for
a large-scale integration of various kinds of knowledge, including basic terms,
rock classification algorithms, findings and reports. We describe three steps
we have taken towards that goal here. First, we develop a semi-automated
procedure for transforming a database of igneous rock samples to texts in a
controlled natural language (CNL), and then a collection of OWL ontologies.
Second, we create an OWL ontology of important petrology terms currently
described in natural language thesauri. We describe a prototype of a tool for
collecting definitions from domain experts. Third, we present an approach to
formalization of current industrial standards for classification of rock
samples, which requires linear equations in OWL 2. In conclusion, we discuss a
range of opportunities arising from the use of semantic technologies in
petrology and outline the future work in this area.
| [
{
"version": "v1",
"created": "Wed, 8 Jun 2011 07:01:59 GMT"
}
] | 1,307,577,600,000 | [
[
"Shkotin",
"Alex",
""
],
[
"Ryakhovsky",
"Vladimir",
""
],
[
"Kudryavtsev",
"Dmitry",
""
]
] |
1106.1796 | C. Drummond | C. Drummond | Accelerating Reinforcement Learning by Composing Solutions of
Automatically Identified Subtasks | null | Journal Of Artificial Intelligence Research, Volume 16, pages
59-104, 2002 | 10.1613/jair.904 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper discusses a system that accelerates reinforcement learning by
using transfer from related tasks. Without such transfer, even if two tasks are
very similar at some abstract level, an extensive re-learning effort is
required. The system achieves much of its power by transferring parts of
previously learned solutions rather than a single complete solution. The system
exploits strong features in the multi-dimensional function produced by
reinforcement learning in solving a particular task. These features are stable
and easy to recognize early in the learning process. They generate a
partitioning of the state space and thus the function. The partition is
represented as a graph. This is used to index and compose functions stored in a
case base to form a close approximation to the solution of the new task.
Experiments demonstrate that function composition often produces more than an
order of magnitude increase in learning rate compared to a basic reinforcement
learning algorithm.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:11:20 GMT"
}
] | 1,307,664,000,000 | [
[
"Drummond",
"C.",
""
]
] |
1106.1797 | Y. Kameya | T. Sato, Y. Kameya | Parameter Learning of Logic Programs for Symbolic-Statistical Modeling | null | Journal Of Artificial Intelligence Research, Volume 15, pages
391-454, 2001 | 10.1613/jair.912 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a logical/mathematical framework for statistical parameter
learning of parameterized logic programs, i.e. definite clause programs
containing probabilistic facts with a parameterized distribution. It extends
the traditional least Herbrand model semantics in logic programming to
distribution semantics, possible world semantics with a probability
distribution which is unconditionally applicable to arbitrary logic programs
including ones for HMMs, PCFGs and Bayesian networks. We also propose a new EM
algorithm, the graphical EM algorithm, that runs for a class of parameterized
logic programs representing sequential decision processes where each decision
is exclusive and independent. It runs on a new data structure called support
graphs describing the logical relationship between observations and their
explanations, and learns parameters by computing inside and outside probabilities
generalized for logic programs. The complexity analysis shows that when
combined with OLDT search for all explanations for observations, the graphical
EM algorithm, despite its generality, has the same time complexity as existing
EM algorithms, i.e. the Baum-Welch algorithm for HMMs, the Inside-Outside
algorithm for PCFGs, and the one for singly connected Bayesian networks that
have been developed independently in each research field. Learning experiments
with PCFGs using two corpora of moderate size indicate that the graphical EM
algorithm can significantly outperform the Inside-Outside algorithm.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:13:03 GMT"
}
] | 1,314,316,800,000 | [
[
"Sato",
"T.",
""
],
[
"Kameya",
"Y.",
""
]
] |
1106.1799 | C. Meek | C. Meek | Finding a Path is Harder than Finding a Tree | null | Journal Of Artificial Intelligence Research, Volume 15, pages
383-389, 2001 | 10.1613/jair.914 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I consider the problem of learning an optimal path graphical model from data
and show the problem to be NP-hard for the maximum likelihood and minimum
description length approaches and a Bayesian approach. This hardness result
holds despite the fact that the problem is a restriction of the polynomially
solvable problem of finding the optimal tree graphical model.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:13:51 GMT"
}
] | 1,307,664,000,000 | [
[
"Meek",
"C.",
""
]
] |
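The polynomially solvable counterpart the abstract contrasts with is tree learning à la Chow and Liu: a maximum-weight spanning tree over pairwise mutual information. A sketch assuming a dense symmetric matrix with strictly positive off-diagonal mutual-information values (SciPy reads zero entries as absent edges):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_edges(mi):
    """Optimal tree skeleton: maximum-weight spanning tree over pairwise
    mutual information, found by negating the weights so SciPy's
    *minimum* spanning tree routine returns the maximum one."""
    mst = minimum_spanning_tree(-np.asarray(mi, dtype=float))
    rows, cols = mst.nonzero()
    return list(zip(rows.tolist(), cols.tolist()))  # tree edges as index pairs
```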
1106.1800 | J. F. Baget | J. F. Baget, M. L. Mugnier | Extensions of Simple Conceptual Graphs: the Complexity of Rules and
Constraints | null | Journal Of Artificial Intelligence Research, Volume 16, pages
425-465, 2002 | 10.1613/jair.918 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simple conceptual graphs are considered as the kernel of most knowledge
representation formalisms built upon Sowa's model. Reasoning in this model can
be expressed by a graph homomorphism called projection, whose semantics is
usually given in terms of positive, conjunctive, existential FOL. We present
here a family of extensions of this model, based on rules and constraints,
keeping graph homomorphism as the basic operation. We focus on the formal
definitions of the different models obtained, including their operational
semantics and relationships with FOL, and we analyze the decidability and
complexity of the associated problems (consistency and deduction). As soon as
rules are involved in reasoning, these problems are not decidable, but we
exhibit a condition under which they fall in the polynomial hierarchy. These
results extend and complete the ones already published by the authors. Moreover
we systematically study the complexity of some particular cases obtained by
restricting the form of constraints and/or rules.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:17:53 GMT"
}
] | 1,307,664,000,000 | [
[
"Baget",
"J. F.",
""
],
[
"Mugnier",
"M. L.",
""
]
] |
1106.1802 | F. Baader | F. Baader, C. Lutz, H. Sturm, F. Wolter | Fusions of Description Logics and Abstract Description Systems | null | Journal Of Artificial Intelligence Research, Volume 16, pages
1-58, 2002 | 10.1613/jair.919 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fusions are a simple way of combining logics. For normal modal logics,
fusions have been investigated in detail. In particular, it is known that,
under certain conditions, decidability transfers from the component logics to
their fusion. Though description logics are closely related to modal logics,
they are not necessarily normal. In addition, ABox reasoning in description
logics is not covered by the results from modal logics. In this paper, we
extend the decidability transfer results from normal modal logics to a large
class of description logics. To cover different description logics in a uniform
way, we introduce abstract description systems, which can be seen as a common
generalization of description and modal logics, and show the transfer results
in this general setting.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:18:57 GMT"
}
] | 1,307,664,000,000 | [
[
"Baader",
"F.",
""
],
[
"Lutz",
"C.",
""
],
[
"Sturm",
"H.",
""
],
[
"Wolter",
"F.",
""
]
] |
1106.1803 | H. Blockeel | H. Blockeel, L. Dehaspe, B. Demoen, G. Janssens, J. Ramon, H.
Vandecasteele | Improving the Efficiency of Inductive Logic Programming Through the Use
of Query Packs | null | Journal Of Artificial Intelligence Research, Volume 16, pages
135-166, 2002 | 10.1613/jair.924 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inductive logic programming, or relational learning, is a powerful paradigm
for machine learning or data mining. However, in order for ILP to become
practically useful, the efficiency of ILP systems must improve substantially.
To this end, the notion of a query pack is introduced: it structures sets of
similar queries. Furthermore, a mechanism is described for executing such query
packs. A complexity analysis shows that considerable efficiency improvements
can be achieved through the use of this query pack execution mechanism. This
claim is supported by empirical results obtained by incorporating support for
query pack execution in two existing learning systems.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:19:53 GMT"
}
] | 1,307,664,000,000 | [
[
"Blockeel",
"H.",
""
],
[
"Dehaspe",
"L.",
""
],
[
"Demoen",
"B.",
""
],
[
"Janssens",
"G.",
""
],
[
"Ramon",
"J.",
""
],
[
"Vandecasteele",
"H.",
""
]
] |
1106.1804 | E. Dahlman | E. Dahlman, A. E. Howe | A Critical Assessment of Benchmark Comparison in Planning | null | Journal Of Artificial Intelligence Research, Volume 17, pages
1-33, 2002 | 10.1613/jair.935 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent trends in planning research have led to empirical comparison becoming
commonplace. The field has started to settle into a methodology for such
comparisons, which for obvious practical reasons requires running a subset of
planners on a subset of problems. In this paper, we characterize the
methodology and examine eight implicit assumptions about the problems, planners
and metrics used in many of these comparisons. The problem assumptions are:
PR1) the performance of a general purpose planner should not be
penalized/biased if executed on a sampling of problems and domains, PR2) minor
syntactic differences in representation do not affect performance, and PR3)
problems should be solvable by STRIPS capable planners unless they require ADL.
The planner assumptions are: PL1) the latest version of a planner is the best
one to use, PL2) default parameter settings approximate good performance, and
PL3) time cut-offs do not unduly bias outcome. The metrics assumptions are: M1)
performance degrades similarly for each planner when run on degraded runtime
environments (e.g., machine platform) and M2) the number of plan steps
distinguishes performance. We find that most of these assumptions are not
supported empirically; in particular, that planners are affected differently by
these assumptions. We conclude with a call to the community to devote research
resources to improving the state of the practice and especially to enhancing
the available benchmark problems.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:20:39 GMT"
}
] | 1,307,664,000,000 | [
[
"Dahlman",
"E.",
""
],
[
"Howe",
"A. E.",
""
]
] |
1106.1813 | K. W. Bowyer | N. V. Chawla, K. W. Bowyer, L. O. Hall, W. P. Kegelmeyer | SMOTE: Synthetic Minority Over-sampling Technique | null | Journal Of Artificial Intelligence Research, Volume 16, pages
321-357, 2002 | 10.1613/jair.953 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An approach to the construction of classifiers from imbalanced datasets is
described. A dataset is imbalanced if the classification categories are not
approximately equally represented. Often real-world data sets are predominantly
composed of "normal" examples with only a small percentage of "abnormal" or
"interesting" examples. It is also the case that the cost of misclassifying an
abnormal (interesting) example as a normal example is often much higher than
the cost of the reverse error. Under-sampling of the majority (normal) class
has been proposed as a good means of increasing the sensitivity of a classifier
to the minority class. This paper shows that a combination of our method of
over-sampling the minority (abnormal) class and under-sampling the majority
(normal) class can achieve better classifier performance (in ROC space) than
only under-sampling the majority class. This paper also shows that a
combination of our method of over-sampling the minority class and
under-sampling the majority class can achieve better classifier performance (in
ROC space) than varying the loss ratios in Ripper or class priors in Naive
Bayes. Our method of over-sampling the minority class involves creating
synthetic minority class examples. Experiments are performed using C4.5, Ripper
and a Naive Bayes classifier. The method is evaluated using the area under the
Receiver Operating Characteristic curve (AUC) and the ROC convex hull strategy.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:53:42 GMT"
}
] | 1,322,179,200,000 | [
[
"Chawla",
"N. V.",
""
],
[
"Bowyer",
"K. W.",
""
],
[
"Hall",
"L. O.",
""
],
[
"Kegelmeyer",
"W. P.",
""
]
] |
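The core of the over-sampling method is stated directly in the abstract: create synthetic minority examples by interpolating toward minority-class nearest neighbours. A minimal sketch, assuming a NumPy array of minority samples and illustrative parameter values:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_synth, k=5, seed=0):
    """Create n_synth synthetic minority examples: pick a minority sample,
    pick one of its k nearest minority neighbours, and interpolate a
    random fraction of the way along the segment between them."""
    rng = np.random.default_rng(seed)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)  # column 0 is the point itself
    _, idx = nn.kneighbors(X_min)
    synth = []
    for _ in range(n_synth):
        i = rng.integers(len(X_min))
        j = idx[i, rng.integers(1, k + 1)]               # a true neighbour, not self
        synth.append(X_min[i] + rng.random() * (X_min[j] - X_min[i]))
    return np.vstack(synth)
```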
1106.1814 | H. Chan | H. Chan, A. Darwiche | When do Numbers Really Matter? | null | Journal Of Artificial Intelligence Research, Volume 17, pages
265-287, 2002 | 10.1613/jair.967 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Common wisdom has it that small distinctions in the probabilities
(parameters) quantifying a belief network do not matter much for the results of
probabilistic queries. Yet, one can develop realistic scenarios under which
small variations in network parameters can lead to significant changes in
computed queries. A pending theoretical question is then to analytically
characterize parameter changes that do or do not matter. In this paper, we
study the sensitivity of probabilistic queries to changes in network parameters
and prove some tight bounds on the impact that such parameters can have on
queries. Our analytic results pinpoint some interesting situations under which
parameter changes do or do not matter. These results are important for
knowledge engineers as they help them identify influential network parameters.
They also help explain some of the previous experimental results and
observations with regards to network robustness against parameter changes.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:54:07 GMT"
}
] | 1,307,664,000,000 | [
[
"Chan",
"H.",
""
],
[
"Darwiche",
"A.",
""
]
] |
1106.1816 | G. A. Kaminka | G. A. Kaminka, D. V. Pynadath, M. Tambe | Monitoring Teams by Overhearing: A Multi-Agent Plan-Recognition Approach | null | Journal Of Artificial Intelligence Research, Volume 17, pages
83-135, 2002 | 10.1613/jair.970 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years are seeing an increasing need for on-line monitoring of teams of
cooperating agents, e.g., for visualization, or performance tracking. However,
in monitoring deployed teams, we often cannot rely on the agents to always
communicate their state to the monitoring system. This paper presents a
non-intrusive approach to monitoring by 'overhearing', where the monitored
team's state is inferred (via plan-recognition) from team-members' routine
communications, exchanged as part of their coordinated task execution, and
observed (overheard) by the monitoring system. Key challenges in this approach
include the demanding run-time requirements of monitoring, the scarceness of
observations (increasing monitoring uncertainty), and the need to scale-up
monitoring to address potentially large teams. To address these, we present a
set of complementary novel techniques, exploiting knowledge of the social
structures and procedures in the monitored team: (i) an efficient probabilistic
plan-recognition algorithm, well-suited for processing communications as
observations; (ii) an approach to exploiting knowledge of the team's social
behavior to predict future observations during execution (reducing monitoring
uncertainty); and (iii) monitoring algorithms that trade expressivity for
scalability, representing only certain useful monitoring hypotheses, but
allowing for any number of agents and their different activities to be
represented in a single coherent entity. We present an empirical evaluation of
these techniques, in combination and apart, in monitoring a deployed team of
agents, running on machines physically distributed across the country, and
engaged in complex, dynamic task execution. We also compare the performance of
these techniques to human expert and novice monitors, and show that the
techniques presented are capable of monitoring at human-expert levels, despite
the difficulty of the task.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:54:54 GMT"
}
] | 1,307,664,000,000 | [
[
"Kaminka",
"G. A.",
""
],
[
"Pynadath",
"D. V.",
""
],
[
"Tambe",
"M.",
""
]
] |
1106.1817 | A. Gorin | A. Gorin, I. Langkilde-Geary, M. A. Walker, J. Wright, H. Wright
Hastie | Automatically Training a Problematic Dialogue Predictor for a Spoken
Dialogue System | null | Journal Of Artificial Intelligence Research, Volume 16, pages
293-319, 2002 | 10.1613/jair.971 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spoken dialogue systems promise efficient and natural access to a large
variety of information sources and services from any phone. However, current
spoken dialogue systems are deficient in their strategies for preventing,
identifying and repairing problems that arise in the conversation. This paper
reports results on automatically training a Problematic Dialogue Predictor to
predict problematic human-computer dialogues using a corpus of 4692 dialogues
collected with the 'How May I Help You' (SM) spoken dialogue system. The
Problematic Dialogue Predictor can be immediately applied to the system's
decision of whether to transfer the call to a human customer care agent, or be
used as a cue to the system's dialogue manager to modify its behavior to repair
problems, and even perhaps, to prevent them. We show that a Problematic
Dialogue Predictor using automatically-obtainable features from the first two
exchanges in the dialogue can predict problematic dialogues 13.2% more
accurately than the baseline.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:55:26 GMT"
}
] | 1,307,664,000,000 | [
[
"Gorin",
"A.",
""
],
[
"Langkilde-Geary",
"I.",
""
],
[
"Walker",
"M. A.",
""
],
[
"Wright",
"J.",
""
],
[
"Hastie",
"H. Wright",
""
]
] |
1106.1818 | R. Nock | R. Nock | Inducing Interpretable Voting Classifiers without Trading Accuracy for
Simplicity: Theoretical Results, Approximation Algorithms | null | Journal Of Artificial Intelligence Research, Volume 17, pages
137-170, 2002 | 10.1613/jair.986 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in the study of voting classification algorithms have brought
empirical and theoretical results clearly showing the discrimination power of
ensemble classifiers. It has been previously argued that the search for this
classification power in the design of the algorithms has marginalized the need
to obtain interpretable classifiers. Therefore, the question of whether one
might have to dispense with interpretability in order to keep classification
strength is being raised in a growing number of machine learning or data mining
papers. The purpose of this paper is to study the problem both theoretically
and empirically. First, we provide numerous results giving insight into
the hardness of the simplicity-accuracy tradeoff for voting classifiers. Then
we provide an efficient "top-down and prune" induction heuristic, WIDC, mainly
derived from recent results on the weak learning and boosting frameworks. It is
to our knowledge the first attempt to build a voting classifier as a base
formula using the weak learning framework (the one which was previously highly
successful for decision tree induction), and not the strong learning framework
(as usual for such classifiers with boosting-like approaches). While it uses a
well-known induction scheme previously successful in other classes of concept
representations, thus making it easy to implement and compare, WIDC also relies
on recent or new results we give about particular cases of boosting known as
partition boosting and ranking loss boosting. Experimental results on
thirty-one domains, most of which are readily available, tend to display the
ability of WIDC to produce small, accurate, and interpretable decision
committees.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:56:01 GMT"
}
] | 1,307,664,000,000 | [
[
"Nock",
"R.",
""
]
] |
1106.1819 | A. Darwiche | A. Darwiche, P. Marquis | A Knowledge Compilation Map | null | Journal Of Artificial Intelligence Research, Volume 17, pages
229-264, 2002 | 10.1613/jair.989 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a perspective on knowledge compilation which calls for analyzing
different compilation approaches according to two key dimensions: the
succinctness of the target compilation language, and the class of queries and
transformations that the language supports in polytime. We then provide a
knowledge compilation map, which analyzes a large number of existing target
compilation languages according to their succinctness and their polytime
transformations and queries. We argue that such analysis is necessary for
placing new compilation approaches within the context of existing ones. We also
go beyond classical, flat target compilation languages based on CNF and DNF,
and consider a richer, nested class based on directed acyclic graphs (such as
OBDDs), which we show to include a relatively large number of target
compilation languages.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:56:25 GMT"
}
] | 1,307,664,000,000 | [
[
"Darwiche",
"A.",
""
],
[
"Marquis",
"P.",
""
]
] |
1106.1820 | R. Barzilay | R. Barzilay, N. Elhadad | Inferring Strategies for Sentence Ordering in Multidocument News
Summarization | null | Journal Of Artificial Intelligence Research, Volume 17, pages
35-55, 2002 | 10.1613/jair.991 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of organizing information for multidocument summarization so that
the generated summary is coherent has received relatively little attention.
While sentence ordering for single document summarization can be determined
from the ordering of sentences in the input article, this is not the case for
multidocument summarization where summary sentences may be drawn from different
input articles. In this paper, we propose a methodology for studying the
properties of ordering information in the news genre and describe experiments
done on a corpus of multiple acceptable orderings we developed for the task.
Based on these experiments, we implemented a strategy for ordering information
that combines constraints from chronological order of events and topical
relatedness. Evaluation of our augmented algorithm shows a significant
improvement of the ordering over two baseline strategies.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:57:02 GMT"
}
] | 1,307,664,000,000 | [
[
"Barzilay",
"R.",
""
],
[
"Elhadad",
"N.",
""
]
] |
1106.1821 | K. Tumer | K. Tumer, D. H. Wolpert | Collective Intelligence, Data Routing and Braess' Paradox | null | Journal Of Artificial Intelligence Research, Volume 16, pages
359-387, 2002 | 10.1613/jair.995 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of designing the utility functions of the
utility-maximizing agents in a multi-agent system so that they work
synergistically to maximize a global utility. The particular problem domain we
explore is the control of network routing by placing agents on all the routers
in the network. Conventional approaches to this task have the agents all use
the Ideal Shortest Path routing Algorithm (ISPA). We demonstrate that in many
cases, due to the side-effects of one agent's actions on another agent's
performance, having agents use ISPA's is suboptimal as far as global aggregate
cost is concerned, even when they are only used to route infinitesimally small
amounts of traffic. The utility functions of the individual agents are not
"aligned" with the global utility, intuitively speaking. As a particular
example of this we present an instance of Braess' paradox in which adding new
links to a network whose agents all use the ISPA results in a decrease in
overall throughput. We also demonstrate that load-balancing, in which the
agents' decisions are collectively made to optimize the global cost incurred by
all traffic currently being routed, is suboptimal as far as global cost
averaged across time is concerned. This is also due to 'side-effects', in this
case of current routing decisions on future traffic. The mathematics of
Collective Intelligence (COIN) is concerned precisely with the issue of
avoiding such deleterious side-effects in multi-agent systems, both over time
and space. We present key concepts from that mathematics and use them to derive
an algorithm whose ideal version should have better performance than that of
having all agents use the ISPA, even in the infinitesimal limit. We present
experiments verifying this, and also showing that a machine-learning-based
version of this COIN algorithm in which costs are only imprecisely estimated
via empirical means (a version potentially applicable in the real world) also
outperforms the ISPA, despite having access to less information than does the
ISPA. In particular, this COIN algorithm almost always avoids Braess' paradox.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:57:43 GMT"
}
] | 1,307,664,000,000 | [
[
"Tumer",
"K.",
""
],
[
"Wolpert",
"D. H.",
""
]
] |
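The textbook instance of Braess' paradox behind the abstract's routing example, as a quick numeric check; this standard two-route network is an illustration, not the paper's experimental network.

```python
def braess_demo():
    """Two routes, 1 unit of traffic from S to E. Congestible edges cost x
    (their share of the traffic); fixed edges cost 1. Without the shortcut,
    the equilibrium splits 50/50 and every driver pays 1.5; with a free
    A->B shortcut, everyone switches to S->A->B->E and pays 2.0."""
    x = 0.5                      # equilibrium split without the shortcut
    cost_without = x + 1.0       # S->A (x) + A->E (1) = 1.5
    cost_with = 1.0 + 0.0 + 1.0  # S->A (x=1) + A->B (0) + B->E (x=1) = 2.0
    return cost_without, cost_with

print(braess_demo())             # (1.5, 2.0): adding a link hurts everyone
```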
1106.1822 | C. Guestrin | C. Guestrin, D. Koller, R. Parr, S. Venkataraman | Efficient Solution Algorithms for Factored MDPs | null | Journal Of Artificial Intelligence Research, Volume 19, pages
399-468, 2003 | 10.1613/jair.1000 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the problem of planning under uncertainty in large
Markov Decision Processes (MDPs). Factored MDPs represent a complex state space
using state variables and the transition model using a dynamic Bayesian
network. This representation often allows an exponential reduction in the
representation size of structured MDPs, but the complexity of exact solution
algorithms for such MDPs can grow exponentially in the representation size. In
this paper, we present two approximate solution algorithms that exploit
structure in factored MDPs. Both use an approximate value function represented
as a linear combination of basis functions, where each basis function involves
only a small subset of the domain variables. A key contribution of this paper
is that it shows how the basic operations of both algorithms can be performed
efficiently in closed form, by exploiting both additive and context-specific
structure in a factored MDP. A central element of our algorithms is a novel
linear program decomposition technique, analogous to variable elimination in
Bayesian networks, which reduces an exponentially large LP to a provably
equivalent, polynomial-sized one. One algorithm uses approximate linear
programming, and the second approximate dynamic programming. Our dynamic
programming algorithm is novel in that it uses an approximation based on
max-norm, a technique that more directly minimizes the terms that appear in
error bounds for approximate MDP algorithms. We provide experimental results on
problems with over 10^40 states, demonstrating a promising indication of the
scalability of our approach, and compare our algorithm to an existing
state-of-the-art approach, showing, in some problems, exponential gains in
computation time.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 13:58:37 GMT"
}
] | 1,307,664,000,000 | [
[
"Guestrin",
"C.",
""
],
[
"Koller",
"D.",
""
],
[
"Parr",
"R.",
""
],
[
"Venkataraman",
"S.",
""
]
] |
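The linear value-function representation described in the abstract above can be sketched in a few lines. The toy domain and all names below are illustrative assumptions, not the paper's benchmarks: V(s) is a weighted sum of basis functions, each reading only a small subset of the state variables.

```python
# A minimal sketch of a factored, linear value function for an MDP.

def make_basis(scope, table):
    """scope: tuple of variable names; table: dict from value-tuples to floats."""
    def h(state):
        return table[tuple(state[v] for v in scope)]
    return h

# Two basis functions over a 3-variable boolean state.
h1 = make_basis(("x1",), {(0,): 0.0, (1,): 1.0})
h2 = make_basis(("x2", "x3"), {(0, 0): 0.0, (0, 1): 0.5,
                               (1, 0): 0.5, (1, 1): 2.0})
weights = [3.0, 1.5]
basis = [h1, h2]

def value(state):
    # V(s) = sum_i w_i * h_i(s); each h_i touches only its own scope.
    return sum(w * h(state) for w, h in zip(weights, basis))

print(value({"x1": 1, "x2": 1, "x3": 1}))  # 3.0*1.0 + 1.5*2.0 = 6.0
```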
1106.1853 | Ching-an Hsiao | Ching-an Hsiao and Xinchun Tian | Intelligent decision: towards interpreting the Pe Algorithm | 23 pages, 12 figures, 7 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human intelligence lies in the algorithm; the nature of the algorithm lies in
classification, and classification amounts to outlier detection. Many
algorithms have been proposed to detect outliers, along with many definitions;
an unsatisfying point is that the definitions seem vague, which makes each
solution an ad hoc one. We analyze the nature of outliers and give two clear
definitions. We then develop an efficient RDD algorithm, which converts the
outlier problem into a pattern-and-degree problem. Furthermore, a collapse
mechanism is introduced by the IIR algorithm, which can be combined seamlessly
with the RDD algorithm and serves the final decision. Both algorithms
originate from a study on general AI. The combined edition is named the Pe
algorithm, which is the basis of intelligent decision. We introduce the
longest k-turn subsequence problem and a corresponding solution as an example
to interpret the function of the Pe algorithm in detecting curve-type
outliers. We also compare the IIR and Pe algorithms, yielding a better
understanding of both. A short discussion of intelligence is added to
demonstrate the function of the Pe algorithm. Related experimental results
indicate its robustness.
| [
{
"version": "v1",
"created": "Thu, 9 Jun 2011 16:45:49 GMT"
},
{
"version": "v2",
"created": "Fri, 10 Jun 2011 13:53:05 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Aug 2011 03:25:58 GMT"
}
] | 1,314,144,000,000 | [
[
"Hsiao",
"Ching-an",
""
],
[
"Tian",
"Xinchun",
""
]
] |
1106.1998 | Yi Sun | Yi Sun and Faustino Gomez and Tom Schaul and Juergen Schmidhuber | A Linear Time Natural Evolution Strategy for Non-Separable Functions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel Natural Evolution Strategy (NES) variant, the Rank-One NES
(R1-NES), which uses a low rank approximation of the search distribution
covariance matrix. The algorithm allows computation of the natural gradient
with cost linear in the dimensionality of the parameter space, and excels in
solving high-dimensional non-separable problems, including the best result to
date on the Rosenbrock function (512 dimensions).
| [
{
"version": "v1",
"created": "Fri, 10 Jun 2011 09:56:00 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2011 09:57:57 GMT"
}
] | 1,308,009,600,000 | [
[
"Sun",
"Yi",
""
],
[
"Gomez",
"Faustino",
""
],
[
"Schaul",
"Tom",
""
],
[
"Schmidhuber",
"Juergen",
""
]
] |
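The rank-one covariance parametrization behind R1-NES admits O(d) sampling. The sketch below shows only that parametrization, under the assumption C = sigma^2 (I + u u^T); the paper's natural-gradient update for the parameters is not reproduced.

```python
# Sampling from N(mu, sigma^2 (I + u u^T)) in O(d) time:
# z = mu + sigma * (eps + u * s), eps ~ N(0, I), s ~ N(0, 1),
# has exactly that covariance, since E[(eps + u s)(eps + u s)^T] = I + u u^T.
import numpy as np

rng = np.random.default_rng(0)
d = 512
mu = np.zeros(d)
sigma = 0.1
u = rng.normal(size=d) / np.sqrt(d)   # the rank-one direction

def sample():
    eps = rng.normal(size=d)
    s = rng.normal()
    return mu + sigma * (eps + u * s)

# Empirical check of the covariance on the first two coordinates:
zs = np.stack([sample() for _ in range(5000)])
print(np.cov(zs[:, 0], zs[:, 1]))                       # empirical 2x2 block
print(sigma**2 * (np.eye(2) + np.outer(u[:2], u[:2])))  # analytic counterpart
```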
1106.2647 | Joseph Y. Halpern | Joseph Y. Halpern | From Causal Models To Counterfactual Structures | A preliminary version of this paper appears in the Proceedings of the
Twelfth International Conference on Principles of Knowledge Representation
and Reasoning (KR 2010), 2010. | Review of Symbolic Logic 6:2, 2013, pp. 305--322 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Galles and Pearl claimed that "for recursive models, the causal model
framework does not add any restrictions to counterfactuals, beyond those
imposed by Lewis's [possible-worlds] framework." This claim is examined
carefully, with the goal of clarifying the exact relationship between causal
models and Lewis's framework. Recursive models are shown to correspond
precisely to a subclass of (possible-world) counterfactual structures. On the
other hand, a slight generalization of recursive models, models where all
equations have unique solutions, is shown to be incomparable in expressive
power to counterfactual structures, despite the fact that the Galles and Pearl
arguments should apply to them as well. The problem with the Galles and Pearl
argument is identified: an axiom that they viewed as irrelevant, because it
involved disjunction (which was not in their language), is not irrelevant at
all.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2011 09:34:05 GMT"
},
{
"version": "v2",
"created": "Sat, 17 Aug 2013 13:36:57 GMT"
}
] | 1,376,956,800,000 | [
[
"Halpern",
"Joseph Y.",
""
]
] |
1106.2652 | Joseph Y. Halpern | Joseph Y. Halpern and Christopher Hitchcock | Actual causation and the art of modeling | In <em>Heuristics, Probability and Causality: A Tribute to Judea
Pearl</em> (editors, R. Dechter, H. Geffner, and J. Y. Halpern), College
Publications, 2010, pp. 383-406 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We look more carefully at the modeling of causality using structural
equations. It is clear that the structural equations can have a major impact on
the conclusions we draw about causality. In particular, the choice of variables
and their values can also have a significant impact on causality. These choices
are, to some extent, subjective. We consider what counts as an appropriate
choice. More generally, we consider what makes a model an appropriate model,
especially if we want to take defaults into account, as was argued is necessary
in recent work.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2011 09:40:55 GMT"
}
] | 1,308,096,000,000 | [
[
"Halpern",
"Joseph Y.",
""
],
[
"Hitchcock",
"Christopher",
""
]
] |
1106.2692 | Nicolas Peltier | Vincent Aravantinos and Nicolas Peltier | Generating Schemata of Resolution Proofs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two distinct algorithms are presented to extract (schemata of) resolution
proofs from closed tableaux for propositional schemata. The first one handles
the most efficient version of the tableau calculus but generates very complex
derivations (denoted by rather elaborate rewrite systems). The second one has
the advantage that much simpler systems can be obtained; however, the considered
proof procedure is less efficient.
| [
{
"version": "v1",
"created": "Tue, 14 Jun 2011 12:40:07 GMT"
}
] | 1,426,723,200,000 | [
[
"Aravantinos",
"Vincent",
""
],
[
"Peltier",
"Nicolas",
""
]
] |
1106.3361 | Miron Kursa | Miron B. Kursa and {\L}ukasz Komsta and Witold R. Rudnicki | Random forest models of the retention constants in the thin layer
chromatography | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the current study we examine an application of the machine learning
methods to model the retention constants in the thin layer chromatography
(TLC). This problem can be described with hundreds or even thousands of
descriptors relevant to various molecular properties, most of them redundant
and not relevant for the retention constant prediction. Hence we employed
feature selection to significantly reduce the number of attributes.
Additionally, we have tested the application of the bagging procedure to the feature
selection. The random forest regression models were built using selected
variables. The resulting models have better correlation with the experimental
data than the reference models obtained with linear regression. The
cross-validation confirms robustness of the models.
| [
{
"version": "v1",
"created": "Thu, 16 Jun 2011 22:05:21 GMT"
}
] | 1,308,528,000,000 | [
[
"Kursa",
"Miron B.",
""
],
[
"Komsta",
"Łukasz",
""
],
[
"Rudnicki",
"Witold R.",
""
]
] |
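A hedged sketch of the kind of pipeline the abstract above describes, using scikit-learn: importance-based feature selection followed by a random forest regression on the reduced set. The synthetic data and the top-10 cutoff are illustrative assumptions, not the paper's descriptors or its selection procedure.

```python
# Feature selection + random forest regression on synthetic "descriptors".
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))          # 300 descriptors, most irrelevant
y = 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.1, size=200)

# Step 1: crude importance-based feature selection.
probe = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(probe.feature_importances_)[-10:]   # top-10 descriptors

# Step 2: final regression model on the reduced descriptor set.
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X[:, keep], y)
print("selected descriptor indices:", sorted(keep.tolist()))
```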
1106.3876 | Amandine Bellenger | Amandine Bellenger (LITIS), Sylvain Gatepaille | Uncertainty in Ontologies: Dempster-Shafer Theory for Data Fusion
Applications | Workshop on Theory of Belief Functions, Brest: France (2010) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontologies are attracting growing interest in Data Fusion applications.
As a matter of fact, the ontologies are seen as a semantic tool for describing
and reasoning about sensor data, objects, relations and general domain
theories. In addition, uncertainty is perhaps one of the most important
characteristics of the data and information handled by Data Fusion. However,
the fundamental nature of ontologies implies that ontologies describe only
asserted and veracious facts of the world. Different probabilistic, fuzzy and
evidential approaches already exist to fill this gap; this paper recaps the
most popular tools. However none of the tools meets exactly our purposes.
Therefore, we constructed a Dempster-Shafer ontology that can be imported into
any specific domain ontology and that enables us to instantiate it in an
uncertain manner. We also developed a Java application that enables reasoning
about these uncertain ontological instances.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2011 12:05:20 GMT"
}
] | 1,308,614,400,000 | [
[
"Bellenger",
"Amandine",
"",
"LITIS"
],
[
"Gatepaille",
"Sylvain",
""
]
] |
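The basic operation behind the evidential approach above is Dempster's rule of combination. A minimal sketch over a toy frame of discernment follows; the example masses are invented for illustration.

```python
# Dempster's rule of combination for two mass functions over a small frame
# of discernment. Focal elements are frozensets; masses sum to 1.
from itertools import product

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalize by the non-conflicting mass (assumes conflict < 1).
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

F = frozenset
m1 = {F({"car"}): 0.6, F({"car", "truck"}): 0.4}
m2 = {F({"truck"}): 0.3, F({"car", "truck"}): 0.7}
print(combine(m1, m2))
```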
1106.3932 | Jean-Louis Dessalles | Jean-Louis J.-L. Dessalles (IC2) | Coincidences and the encounter problem: A formal account | 30th Annual Conference of the Cognitive Science Society, Washington :
United States (2008) | null | null | jld-08020201 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individuals have an intuitive perception of what makes a good coincidence.
Though the sensitivity to coincidences has often been presented as resulting
from an erroneous assessment of probability, it appears to be a genuine
competence, based on non-trivial computations. The model presented here
suggests that coincidences occur when subjects perceive complexity drops.
Co-occurring events are, together, simpler than if considered separately. This
model leads to a possible redefinition of subjective probability.
| [
{
"version": "v1",
"created": "Mon, 20 Jun 2011 15:05:53 GMT"
}
] | 1,308,614,400,000 | [
[
"Dessalles",
"Jean-Louis J. -L.",
"",
"IC2"
]
] |
1106.4218 | Walter Quattrociocchi | Francesca Giardini, Walter Quattrociocchi, Rosaria Conte | Rooting opinions in the minds: a cognitive model and a formal account of
opinions and their dynamics | null | SNAMAS 2011 : THIRD SOCIAL NETWORKS AND MULTIAGENT SYSTEMS
SYMPOSIUM SNAMAS@AISB 2011 | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The study of opinions, their formation and change, is one of the defining
topics addressed by social psychology, but in recent years other disciplines,
like computer science and complexity, have tried to deal with this issue.
Despite the flourishing of different models and theories in both fields,
several key questions still remain unanswered. The understanding of how
opinions change and the way they are affected by social influence are
challenging issues requiring a thorough analysis not only of opinions per se but also
of the way in which they travel between agents' minds and are modulated by these
exchanges. To account for the two-faceted nature of opinions, which are mental
entities undergoing complex social processes, we outline a preliminary model in
which a cognitive theory of opinions is put forward and it is paired with a
formal description of them and of their spreading among minds. Furthermore,
investigating social influence also implies the necessity to account for the
way in which people change their minds, as a consequence of interacting with
other people, and the need to explain the higher or lower persistence of such
changes.
| [
{
"version": "v1",
"created": "Tue, 21 Jun 2011 14:52:09 GMT"
}
] | 1,308,700,800,000 | [
[
"Giardini",
"Francesca",
""
],
[
"Quattrociocchi",
"Walter",
""
],
[
"Conte",
"Rosaria",
""
]
] |
1106.4221 | Walter Quattrociocchi | Francesca Giardini, Walter Quattrociocchi, Rosaria Conte | Understanding opinions. A cognitive and formal account | null | Cultural and opinion dynamics: Modeling, Experiments and
Challenges for the future @ ECCS 2011 | null | null | cs.AI | http://creativecommons.org/licenses/publicdomain/ | The study of opinions, their formation and change, is one of the defining
topics addressed by social psychology, but in recent years other disciplines,
such as computer science and complexity science, have addressed this challenge. Despite the
flourishing of different models and theories in both fields, several key
questions still remain unanswered. The aim of this paper is to challenge the
current theories on opinion by putting forward a cognitively grounded model
where opinions are described as specific mental representations whose main
properties are characterized. A comparison with reputation will also be
presented.
| [
{
"version": "v1",
"created": "Tue, 21 Jun 2011 15:00:33 GMT"
}
] | 1,308,700,800,000 | [
[
"Giardini",
"Francesca",
""
],
[
"Quattrociocchi",
"Walter",
""
],
[
"Conte",
"Rosaria",
""
]
] |
1106.4557 | F. Provost | F. Provost, G. M. Weiss | Learning When Training Data are Costly: The Effect of Class Distribution
on Tree Induction | null | Journal Of Artificial Intelligence Research, Volume 19, pages
315-354, 2003 | 10.1613/jair.1199 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For large, real-world inductive learning problems, the number of training
examples often must be limited due to the costs associated with procuring,
preparing, and storing the training examples and/or the computational costs
associated with learning from them. In such circumstances, one question of
practical importance is: if only n training examples can be selected, in what
proportion should the classes be represented? In this article we help to answer
this question by analyzing, for a fixed training-set size, the relationship
between the class distribution of the training data and the performance of
classification trees induced from these data. We study twenty-six data sets
and, for each, determine the best class distribution for learning. The
naturally occurring class distribution is shown to generally perform well when
classifier performance is evaluated using undifferentiated error rate (0/1
loss). However, when the area under the ROC curve is used to evaluate
classifier performance, a balanced distribution is shown to perform well. Since
neither of these choices for class distribution always generates the
best-performing classifier, we introduce a budget-sensitive progressive
sampling algorithm for selecting training examples based on the class
associated with each example. An empirical analysis of this algorithm shows
that the class distribution of the resulting training set yields classifiers
with good (nearly-optimal) classification performance.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2011 20:11:46 GMT"
}
] | 1,308,873,600,000 | [
[
"Provost",
"F.",
""
],
[
"Weiss",
"G. M.",
""
]
] |
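The core experiment the abstract above describes can be sketched as follows: fix a training budget n, vary the class distribution, and compare the induced classifiers by AUC. This is a simplified illustration with synthetic data, not the paper's budget-sensitive progressive sampling algorithm.

```python
# Compare class distributions under a fixed training budget.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=20000, weights=[0.9], random_state=0)
train_idx, test_idx = np.arange(15000), np.arange(15000, 20000)
n = 1000  # fixed training budget

for pos_frac in (0.1, 0.3, 0.5):
    pos = rng.choice(np.where(y[train_idx] == 1)[0], int(n * pos_frac))
    neg = rng.choice(np.where(y[train_idx] == 0)[0], n - int(n * pos_frac))
    idx = np.concatenate([pos, neg])
    clf = DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
    auc = roc_auc_score(y[test_idx], clf.predict_proba(X[test_idx])[:, 1])
    print(f"positive fraction {pos_frac:.1f}: AUC = {auc:.3f}")
```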
1106.4561 | M. Fox | M. Fox, D. Long | PDDL2.1: An Extension to PDDL for Expressing Temporal Planning Domains | null | Journal Of Artificial Intelligence Research, Volume 20, pages
61-124, 2003 | 10.1613/jair.1129 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years research in the planning community has moved increasingly
towards application of planners to realistic problems involving both time and
many types of resources. For example, interest in planning demonstrated by the
space research community has inspired work in observation scheduling,
planetary rover exploration and spacecraft control domains. Other temporal and
resource-intensive domains including logistics planning, plant control and
manufacturing have also helped to focus the community on the modelling and
reasoning issues that must be confronted to make planning technology meet the
challenges of application. The International Planning Competitions have acted
as an important motivating force behind the progress that has been made in
planning since 1998. The third competition (held in 2002) set the planning
community the challenge of handling time and numeric resources. This
necessitated the development of a modelling language capable of expressing
temporal and numeric properties of planning domains. In this paper we describe
the language, PDDL2.1, that was used in the competition. We describe the syntax
of the language, its formal semantics and the validation of concurrent plans.
We observe that PDDL2.1 has considerable modelling power --- exceeding the
capabilities of current planning technology --- and presents a number of
important challenges to the research community.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2011 20:20:10 GMT"
}
] | 1,308,873,600,000 | [
[
"Fox",
"M.",
""
],
[
"Long",
"D.",
""
]
] |
1106.4569 | D. V. Pynadath | D. V. Pynadath, M. Tambe | The Communicative Multiagent Team Decision Problem: Analyzing Teamwork
Theories and Models | null | Journal Of Artificial Intelligence Research, Volume 16, pages
389-423, 2002 | 10.1613/jair.1024 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the significant progress in multiagent teamwork, existing research
does not address the optimality of its prescriptions nor the complexity of the
teamwork problem. Without a characterization of the optimality-complexity
tradeoffs, it is impossible to determine whether the assumptions and
approximations made by a particular theory gain enough efficiency to justify
the losses in overall performance. To provide a tool for use by multiagent
researchers in evaluating this tradeoff, we present a unified framework, the
COMmunicative Multiagent Team Decision Problem (COM-MTDP). The COM-MTDP model
combines and extends existing multiagent theories, such as decentralized
partially observable Markov decision processes and economic team theory. In
addition to their generality of representation, COM-MTDPs also support the
analysis of both the optimality of team performance and the computational
complexity of the agents' decision problem. In analyzing complexity, we present
a breakdown of the computational complexity of constructing optimal teams under
various classes of problem domains, along the dimensions of observability and
communication cost. In analyzing optimality, we exploit the COM-MTDP's ability
to encode existing teamwork theories and models to encode two instantiations of
joint intentions theory taken from the literature. Furthermore, the COM-MTDP
model provides a basis for the development of novel team coordination
algorithms. We derive a domain-independent criterion for optimal communication
and provide a comparative analysis of the two joint intentions instantiations
with respect to this optimal policy. We have implemented a reusable,
domain-independent software package based on COM-MTDPs to analyze teamwork
coordination strategies, and we demonstrate its use by encoding and evaluating
the two joint intentions strategies within an example domain.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2011 20:55:38 GMT"
}
] | 1,308,873,600,000 | [
[
"Pynadath",
"D. V.",
""
],
[
"Tambe",
"M.",
""
]
] |
1106.4573 | D. V. Pynadath | D. V. Pynadath, P. Scerri, M. Tambe | Towards Adjustable Autonomy for the Real World | null | Journal Of Artificial Intelligence Research, Volume 17, pages
171-228, 2002 | 10.1613/jair.1037 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adjustable autonomy refers to entities dynamically varying their own
autonomy, transferring decision-making control to other entities (typically
agents transferring control to human users) in key situations. Determining
whether and when such transfers-of-control should occur is arguably the
fundamental research problem in adjustable autonomy. Previous work has
investigated various approaches to addressing this problem but has often
focused on individual agent-human interactions. Unfortunately, domains
requiring collaboration between teams of agents and humans reveal two key
shortcomings of these previous approaches. First, these approaches use rigid
one-shot transfers of control that can result in unacceptable coordination
failures in multiagent settings. Second, they ignore costs (e.g., in terms of
time delays or effects on actions) to an agent's team due to such
transfers-of-control. To remedy these problems, this article presents a novel
approach to adjustable autonomy, based on the notion of a transfer-of-control
strategy. A transfer-of-control strategy consists of a conditional sequence of
two types of actions: (i) actions to transfer decision-making control (e.g.,
from an agent to a user or vice versa) and (ii) actions to change an agent's
pre-specified coordination constraints with team members, aimed at minimizing
miscoordination costs. The goal is for high-quality individual decisions to be
made with minimal disruption to the coordination of the team. We present a
mathematical model of transfer-of-control strategies. The model guides and
informs the operationalization of the strategies using Markov Decision
Processes, which select an optimal strategy, given an uncertain environment and
costs to the individuals and teams. The approach has been carefully evaluated,
including via its use in a real-world, deployed multi-agent system that assists
a research group in its daily activities.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2011 20:58:48 GMT"
}
] | 1,308,873,600,000 | [
[
"Pynadath",
"D. V.",
""
],
[
"Scerri",
"P.",
""
],
[
"Tambe",
"M.",
""
]
] |
1106.4575 | J. Culberson | J. Culberson, Y. Gao | An Analysis of Phase Transition in NK Landscapes | null | Journal Of Artificial Intelligence Research, Volume 17, pages
309-332, 2002 | 10.1613/jair.1081 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we analyze the decision version of the NK landscape model from
the perspective of threshold phenomena and phase transitions under two random
distributions, the uniform probability model and the fixed ratio model. For the
uniform probability model, we prove that the phase transition is easy in the
sense that there is a polynomial algorithm that can solve a random instance of
the problem with the probability asymptotic to 1 as the problem size tends to
infinity. For the fixed ratio model, we establish several upper bounds for the
solubility threshold, and prove that random instances with parameters above
these upper bounds can be solved polynomially. This, together with our
empirical study for random instances generated below and in the phase
transition region, suggests that the phase transition of the fixed ratio model
is also easy.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2011 20:59:27 GMT"
}
] | 1,308,873,600,000 | [
[
"Culberson",
"J.",
""
],
[
"Gao",
"Y.",
""
]
] |
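For readers unfamiliar with the model analyzed above, here is a minimal sketch of generating and evaluating a random NK landscape; the threshold and phase-transition analysis itself is not reproduced.

```python
# Random NK landscape: each bit's fitness contribution depends on itself
# and K randomly chosen neighbours, via a random contribution table.
import itertools, random

random.seed(0)
N, K = 10, 2

neighbours = [random.sample([j for j in range(N) if j != i], K)
              for i in range(N)]
tables = [{bits: random.random()
           for bits in itertools.product((0, 1), repeat=K + 1)}
          for _ in range(N)]

def fitness(s):
    total = 0.0
    for i in range(N):
        key = (s[i],) + tuple(s[j] for j in neighbours[i])
        total += tables[i][key]
    return total / N

s = tuple(random.randint(0, 1) for _ in range(N))
print(s, fitness(s))
```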
1106.4576 | D. Gamberger | D. Gamberger, N. Lavrac | Expert-Guided Subgroup Discovery: Methodology and Application | null | Journal Of Artificial Intelligence Research, Volume 17, pages
501-527, 2002 | 10.1613/jair.1089 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an approach to expert-guided subgroup discovery. The main
step of the subgroup discovery process, the induction of subgroup descriptions,
is performed by a heuristic beam search algorithm, using a novel parametrized
definition of rule quality which is analyzed in detail. The other important
steps of the proposed subgroup discovery process are the detection of
statistically significant properties of selected subgroups and subgroup
visualization: statistically significant properties are used to enrich the
descriptions of induced subgroups, while the visualization shows subgroup
properties in the form of distributions of the numbers of examples in the
subgroups. The approach is illustrated by the results obtained for a medical
problem of early detection of patient risk groups.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2011 20:59:50 GMT"
}
] | 1,308,873,600,000 | [
[
"Gamberger",
"D.",
""
],
[
"Lavrac",
"N.",
""
]
] |
1106.4578 | J. Lang | J. Lang, P. Liberatore, P. Marquis | Propositional Independence - Formula-Variable Independence and
Forgetting | null | Journal Of Artificial Intelligence Research, Volume 18, pages
391-443, 2003 | 10.1613/jair.1113 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Independence -- the study of what is relevant to a given problem of reasoning
-- has received increasing attention from the AI community. In this paper,
we consider two basic forms of independence, namely, a syntactic one and a
semantic one. We show features and drawbacks of them. In particular, while the
syntactic form of independence is computationally easy to check, there are
cases in which things that intuitively are not relevant are not recognized as
such. We also consider the problem of forgetting, i.e., distilling from a
knowledge base only the part that is relevant to the set of queries constructed
from a subset of the alphabet. While such a process is computationally hard, it
allows for a simplification of subsequent reasoning, and can thus be viewed as
a form of compilation: once the relevant part of a knowledge base has been
extracted, all reasoning tasks to be performed can be simplified.
| [
{
"version": "v1",
"created": "Wed, 22 Jun 2011 21:01:16 GMT"
}
] | 1,308,873,600,000 | [
[
"Lang",
"J.",
""
],
[
"Liberatore",
"P.",
""
],
[
"Marquis",
"P.",
""
]
] |
1106.4863 | A. T. Cemgil | A. T. Cemgil, B. Kappen | Monte Carlo Methods for Tempo Tracking and Rhythm Quantization | null | Journal Of Artificial Intelligence Research, Volume 18, pages
45-81, 2003 | 10.1613/jair.1121 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a probabilistic generative model for timing deviations in
expressive music performance. The structure of the proposed model is equivalent
to a switching state space model. The switch variables correspond to discrete
note locations as in a musical score. The continuous hidden variables denote
the tempo. We formulate two well known music recognition problems, namely tempo
tracking and automatic transcription (rhythm quantization) as filtering and
maximum a posteriori (MAP) state estimation tasks. Exact computation of
posterior features such as the MAP state is intractable in this model class, so
we introduce Monte Carlo methods for integration and optimization. We compare
Markov Chain Monte Carlo (MCMC) methods (such as Gibbs sampling, simulated
annealing and iterative improvement) and sequential Monte Carlo methods
(particle filters). Our simulation results suggest better results with
sequential methods. The methods can be applied in both online and batch
scenarios such as tempo tracking and transcription and are thus potentially
useful in a number of music applications such as adaptive automatic
accompaniment, score typesetting and music information retrieval.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:56:05 GMT"
}
] | 1,309,132,800,000 | [
[
"Cemgil",
"A. T.",
""
],
[
"Kappen",
"B.",
""
]
] |
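Of the sequential Monte Carlo methods compared above, the bootstrap particle filter is the simplest. The skeleton below uses a placeholder scalar random-walk "tempo" model with a Gaussian likelihood, not the paper's switching state-space model.

```python
# Generic bootstrap particle filter: propagate, weight, resample, estimate.
import numpy as np

rng = np.random.default_rng(0)
N_PARTICLES = 1000

def particle_filter(observations, init_tempo=0.5, drift_std=0.01, obs_std=0.05):
    particles = rng.normal(init_tempo, 0.1, N_PARTICLES)  # hidden tempo
    estimates = []
    for y in observations:
        particles += rng.normal(0.0, drift_std, N_PARTICLES)   # propagate
        w = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)    # weight
        w /= w.sum()
        idx = rng.choice(N_PARTICLES, N_PARTICLES, p=w)        # resample
        particles = particles[idx]
        estimates.append(particles.mean())                     # filtered mean
    return estimates

obs = 0.5 + np.cumsum(rng.normal(0, 0.01, 50))  # synthetic observation stream
print(particle_filter(obs)[-1])
```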
1106.4864 | D. Poole | D. Poole, N. L. Zhang | Exploiting Contextual Independence In Probabilistic Inference | null | Journal Of Artificial Intelligence Research, Volume 18, pages
263-313, 2003 | 10.1613/jair.1122 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian belief networks have grown to prominence because they provide
compact representations for many problems for which probabilistic inference is
appropriate, and there are algorithms to exploit this compactness. The next
step is to allow compact representations of the conditional probabilities of a
variable given its parents. In this paper we present such a representation that
exploits contextual independence in terms of parent contexts; which variables
act as parents may depend on the value of other variables. The internal
representation is in terms of contextual factors (confactors), each of which is simply a
pair of a context and a table. The algorithm, contextual variable elimination,
is based on the standard variable elimination algorithm that eliminates the
non-query variables in turn, but when eliminating a variable, the tables that
need to be multiplied can depend on the context. This algorithm reduces to
standard variable elimination when there is no contextual independence
structure to exploit. We show how this can be much more efficient than variable
elimination when there is structure to exploit. We explain why this new method
can exploit more structure than previous methods for structured belief network
inference and an analogous algorithm that uses trees.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:56:26 GMT"
}
] | 1,309,132,800,000 | [
[
"Poole",
"D.",
""
],
[
"Zhang",
"N. L.",
""
]
] |
1106.4865 | B. Kappen | B. Kappen, M. Leisink | Bound Propagation | null | Journal Of Artificial Intelligence Research, Volume 19, pages
139-154, 2003 | 10.1613/jair.1130 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article we present an algorithm to compute bounds on the marginals of
a graphical model. For several small clusters of nodes upper and lower bounds
on the marginal values are computed independently of the rest of the network.
The range of allowed probability distributions over the surrounding nodes is
restricted using earlier computed bounds. As we will show, this can be
considered as a set of constraints in a linear programming problem of which the
objective function is the marginal probability of the center nodes. In this way
knowledge about the marginals of neighbouring clusters is passed to other
clusters thereby tightening the bounds on their marginals. We show that sharp
bounds can be obtained for undirected and directed graphs that are used for
practical applications, but for which exact computations are infeasible.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:56:48 GMT"
}
] | 1,309,132,800,000 | [
[
"Kappen",
"B.",
""
],
[
"Leisink",
"M.",
""
]
] |
1106.4866 | P. Liberatore | P. Liberatore | On Polynomial Sized MDP Succinct Policies | null | Journal Of Artificial Intelligence Research, Volume 21, pages
551-577, 2004 | 10.1613/jair.1134 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policies of Markov Decision Processes (MDPs) determine the next action to
execute from the current state and, possibly, the history (the past states).
When the number of states is large, succinct representations are often used to
compactly represent both the MDPs and the policies in a reduced amount of
space. In this paper, some problems related to the size of succinctly
represented policies are analyzed. Namely, it is shown that some MDPs have
policies that can only be represented in space super-polynomial in the size of
the MDP, unless the polynomial hierarchy collapses. This fact motivates the
study of the problem of deciding whether a given MDP has a policy of a given
size and reward. Since some algorithms for MDPs work by finding a succinct
representation of the value function, the problem of deciding the existence of
a succinct representation of a value function of a given size and reward is
also considered.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:57:19 GMT"
}
] | 1,309,132,800,000 | [
[
"Liberatore",
"P.",
""
]
] |
1106.4867 | F. Lin | F. Lin | Compiling Causal Theories to Successor State Axioms and STRIPS-Like
Systems | null | Journal Of Artificial Intelligence Research, Volume 19, pages
279-314, 2003 | 10.1613/jair.1135 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a system for specifying the effects of actions. Unlike those
commonly used in AI planning, our system uses an action description language
that allows one to specify the effects of actions using domain rules, which are
state constraints that can entail new action effects from old ones.
Declaratively, an action domain in our language corresponds to a nonmonotonic
causal theory in the situation calculus. Procedurally, such an action domain is
compiled into a set of logical theories, one for each action in the domain,
from which fully instantiated successor state-like axioms and STRIPS-like
systems are then generated. We expect the system to be a useful tool for
knowledge engineers writing action specifications for classical AI planning
systems, GOLOG systems, and other systems where formal specifications of
actions are needed.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:57:41 GMT"
}
] | 1,309,132,800,000 | [
[
"Lin",
"F.",
""
]
] |
1106.4868 | R. G. Simmons | R. G. Simmons, H. L.S. Younes | VHPOP: Versatile Heuristic Partial Order Planner | null | Journal Of Artificial Intelligence Research, Volume 20, pages
405-430, 2003 | 10.1613/jair.1136 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | VHPOP is a partial order causal link (POCL) planner loosely based on UCPOP.
It draws from the experience gained in the early to mid 1990's on flaw
selection strategies for POCL planning, and combines this with more recent
developments in the field of domain independent planning such as distance based
heuristics and reachability analysis. We present an adaptation of the additive
heuristic for plan space planning, and modify it to account for possible reuse
of existing actions in a plan. We also propose a large set of novel flaw
selection strategies, and show how these can help us solve more problems than
previously possible by POCL planners. VHPOP also supports planning with
durative actions by incorporating standard techniques for temporal constraint
reasoning. We demonstrate that the same heuristic techniques used to boost the
performance of classical POCL planning can be effective in domains with
durative actions as well. The result is a versatile heuristic POCL planner
competitive with established CSP-based and heuristic state space planners.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:58:05 GMT"
}
] | 1,309,132,800,000 | [
[
"Simmons",
"R. G.",
""
],
[
"Younes",
"H. L. S.",
""
]
] |
1106.4869 | T. C. Au | T. C. Au, O. Ilghami, U. Kuter, J. W. Murdock, D. S. Nau, D. Wu, F.
Yaman | SHOP2: An HTN Planning System | null | Journal Of Artificial Intelligence Research, Volume 20, pages
379-404, 2003 | 10.1613/jair.1141 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The SHOP2 planning system received one of the awards for distinguished
performance in the 2002 International Planning Competition. This paper
describes the features of SHOP2 which enabled it to excel in the competition,
especially those aspects of SHOP2 that deal with temporal and metric planning
domains.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:58:42 GMT"
}
] | 1,309,132,800,000 | [
[
"Au",
"T. C.",
""
],
[
"Ilghami",
"O.",
""
],
[
"Kuter",
"U.",
""
],
[
"Murdock",
"J. W.",
""
],
[
"Nau",
"D. S.",
""
],
[
"Wu",
"D.",
""
],
[
"Yaman",
"F.",
""
]
] |
1106.4871 | J. E. Laird | J. E. Laird, R. E. Wray | An Architectural Approach to Ensuring Consistency in Hierarchical
Execution | null | Journal Of Artificial Intelligence Research, Volume 19, pages
355-398, 2003 | 10.1613/jair.1142 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical task decomposition is a method used in many agent systems to
organize agent knowledge. This work shows how the combination of a hierarchy
and persistent assertions of knowledge can lead to difficulty in maintaining
logical consistency in asserted knowledge. We explore the problematic
consequences of persistent assumptions in the reasoning process and introduce
novel potential solutions. Having implemented one of the possible solutions,
Dynamic Hierarchical Justification, its effectiveness is demonstrated with an
empirical analysis.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:59:16 GMT"
}
] | 1,309,132,800,000 | [
[
"Laird",
"J. E.",
""
],
[
"Wray",
"R. E.",
""
]
] |
1106.4872 | C. A. Knoblock | C. A. Knoblock, K. Lerman, S. N. Minton | Wrapper Maintenance: A Machine Learning Approach | null | Journal Of Artificial Intelligence Research, Volume 18, pages
149-181, 2003 | 10.1613/jair.1145 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of online information sources has led to an increased use
of wrappers for extracting data from Web sources. While most of the previous
research has focused on quick and efficient generation of wrappers, the
development of tools for wrapper maintenance has received less attention. This
is an important research problem because Web sources often change in ways that
prevent the wrappers from extracting data correctly. We present an efficient
algorithm that learns structural information about data from positive examples
alone. We describe how this information can be used for two wrapper maintenance
applications: wrapper verification and reinduction. The wrapper verification
system detects when a wrapper is not extracting correct data, usually because
the Web source has changed its format. The reinduction algorithm automatically
recovers from changes in the Web source by identifying data on Web pages so
that a new wrapper may be generated for this source. To validate our approach,
we monitored 27 wrappers over a period of a year. The verification algorithm
correctly discovered 35 of the 37 wrapper changes, and made 16 mistakes,
resulting in precision of 0.73 and recall of 0.95. We validated the reinduction
algorithm on ten Web sources. We were able to successfully reinduce the
wrappers, obtaining precision and recall values of 0.90 and 0.80 on the data
extraction task.
| [
{
"version": "v1",
"created": "Fri, 24 Jun 2011 00:59:47 GMT"
}
] | 1,309,132,800,000 | [
[
"Knoblock",
"C. A.",
""
],
[
"Lerman",
"K.",
""
],
[
"Minton",
"S. N.",
""
]
] |
1106.5111 | Walter Quattrociocchi | Walter Quattrociocchi and Rosaria Conte | Exploiting Reputation in Distributed Virtual Environments | null | Essa 2011 - The 7th European Social Simulation Association
Conference | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cognitive research on reputation has shown several interesting properties
that can improve both the quality of services and the security in distributed
electronic environments. In this paper, the impact of reputation on
decision-making under scarcity of information will be shown. First, a cognitive
theory of reputation will be presented, then a selection of simulation
experimental results from different studies will be discussed. Such results
concern the benefits of reputation when agents need to find out good sellers in
a virtual market-place under uncertainty and informational cheating.
| [
{
"version": "v1",
"created": "Sat, 25 Jun 2011 08:40:48 GMT"
}
] | 1,309,219,200,000 | [
[
"Quattrociocchi",
"Walter",
""
],
[
"Conte",
"Rosaria",
""
]
] |
1106.5112 | Miron Kursa | Miron B. Kursa and Witold R. Rudnicki | The All Relevant Feature Selection using Random Forest | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we examine the application of the random forest classifier for
the all relevant feature selection problem. To this end we first examine two
recently proposed all relevant feature selection algorithms, both random
forest wrappers, on a series of synthetic data sets of varying size. We show
that reasonable accuracy of predictions can be achieved and that the heuristic
algorithms designed to handle the all relevant problem perform close to the
reference ideal algorithm. Then, we apply one of the algorithms to four
families of semi-synthetic data sets to assess how the properties of a
particular data set influence the results of feature selection. Finally we
test the procedure on a well-known gene expression data set. The relevance of
nearly all previously established important genes was confirmed; moreover, the
relevance of several new ones was discovered.
| [
{
"version": "v1",
"created": "Sat, 25 Jun 2011 08:47:23 GMT"
}
] | 1,309,219,200,000 | [
[
"Kursa",
"Miron B.",
""
],
[
"Rudnicki",
"Witold R.",
""
]
] |
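Both wrappers examined above build on the shadow-attribute idea: append permuted copies of every feature and keep the real features whose importance beats the best shadow. The single-round sketch below illustrates that idea; the published algorithms iterate it with statistical testing.

```python
# One round of shadow-attribute feature selection with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # only features 0 and 1 matter

shadows = rng.permuted(X, axis=0)              # shuffle each column: destroys
Xb = np.hstack([X, shadows])                   # feature-label relationships
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(Xb, y)

imp = forest.feature_importances_
threshold = imp[X.shape[1]:].max()             # best shadow importance
relevant = np.where(imp[:X.shape[1]] > threshold)[0]
print("features deemed relevant:", relevant)   # expect [0 1]
```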
1106.5256 | R. I. Brafman | R. I. Brafman, C. Domshlak | Structure and Complexity in Planning with Unary Operators | null | Journal Of Artificial Intelligence Research, Volume 18, pages
315-349, 2003 | 10.1613/jair.1146 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unary operator domains -- i.e., domains in which operators have a single
effect -- arise naturally in many control problems. In its most general form,
the problem of STRIPS planning in unary operator domains is known to be as hard
as the general STRIPS planning problem -- both are PSPACE-complete. However,
unary operator domains induce a natural structure, called the domain's causal
graph. This graph relates between the preconditions and effect of each domain
operator. Causal graphs were exploited by Williams and Nayak in order to
analyze plan generation for one of the controllers in NASA's Deep-Space One
spacecraft. There, they utilized the fact that when this graph is acyclic, a
serialization ordering over any subgoal can be obtained quickly. In this paper
we conduct a comprehensive study of the relationship between the structure of a
domain's causal graph and the complexity of planning in this domain. On the
positive side, we show that a non-trivial polynomial time plan generation
algorithm exists for domains whose causal graph induces a polytree with a
constant bound on its node indegree. On the negative side, we show that even
plan existence is hard when the graph is a directed-path singly connected DAG.
More generally, we show that the number of paths in the causal graph is closely
related to the complexity of planning in the associated domain. Finally we
relate our results to the question of complexity of planning with serializable
subgoals.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:01:50 GMT"
}
] | 1,309,219,200,000 | [
[
"Brafman",
"R. I.",
""
],
[
"Domshlak",
"C.",
""
]
] |
1106.5257 | T. Eiter | T. Eiter, W. Faber, N. Leone, G. Pfeifer, A. Polleres | Answer Set Planning Under Action Costs | null | Journal Of Artificial Intelligence Research, Volume 19, pages
25-71, 2003 | 10.1613/jair.1148 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, planning based on answer set programming has been proposed as an
approach towards realizing declarative planning systems. In this paper, we
present the language Kc, which extends the declarative planning language K by
action costs. Kc provides the notion of admissible and optimal plans, which are
plans whose overall action costs are within a given limit or, respectively, minimal over
all plans (i.e., cheapest plans). As we demonstrate, this novel language allows
for expressing some nontrivial planning tasks in a declarative way.
Furthermore, it can be utilized for representing planning problems under other
optimality criteria, such as computing ``shortest'' plans (with the least
number of steps), and refinement combinations of cheapest and fastest plans. We
study complexity aspects of the language Kc and provide a transformation to
logic programs, such that planning problems are solved via answer set
programming. Furthermore, we report experimental results on selected problems.
Our experience suggests that answer set planning may be a valuable
approach to expressive planning systems in which intricate planning problems
can be naturally specified and solved.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:02:44 GMT"
}
] | 1,309,219,200,000 | [
[
"Eiter",
"T.",
""
],
[
"Faber",
"W.",
""
],
[
"Leone",
"N.",
""
],
[
"Pfeifer",
"G.",
""
],
[
"Polleres",
"A.",
""
]
] |
1106.5258 | R. I. Brafman | R. I. Brafman, M. Tennenholtz | Learning to Coordinate Efficiently: A Model-based Approach | null | Journal Of Artificial Intelligence Research, Volume 19, pages
11-23, 2003 | 10.1613/jair.1154 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In common-interest stochastic games all players receive an identical payoff.
Players participating in such games must learn to coordinate with each other in
order to receive the highest-possible value. A number of reinforcement learning
algorithms have been proposed for this problem, and some have been shown to
converge to good solutions in the limit. In this paper we show that using very
simple model-based algorithms, much better (i.e., polynomial) convergence rates
can be attained. Moreover, our model-based algorithms are guaranteed to
converge to the optimal value, unlike many of the existing algorithms.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:03:18 GMT"
}
] | 1,309,219,200,000 | [
[
"Brafman",
"R. I.",
""
],
[
"Tennenholtz",
"M.",
""
]
] |
1106.5260 | M. Do | M. Do, S. Kambhampati | SAPA: A Multi-objective Metric Temporal Planner | null | Journal Of Artificial Intelligence Research, Volume 20, pages
155-194, 2003 | 10.1613/jair.1156 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | SAPA is a domain-independent heuristic forward chaining planner that can
handle durative actions, metric resource constraints, and deadline goals. It is
designed to be capable of handling the multi-objective nature of metric
temporal planning. Our technical contributions include (i) planning-graph based
methods for deriving heuristics that are sensitive to both cost and makespan
(ii) techniques for adjusting the heuristic estimates to take action
interactions and metric resource limitations into account and (iii) a linear
time greedy post-processing technique to improve execution flexibility of the
solution plans. An implementation of SAPA using many of the techniques
presented in this paper was one of the best domain independent planners for
domains with metric and temporal constraints in the third International
Planning Competition, held at AIPS-02. We describe the technical details of
extracting the heuristics and present an empirical evaluation of the current
implementation of SAPA.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:03:40 GMT"
}
] | 1,309,219,200,000 | [
[
"Do",
"M.",
""
],
[
"Kambhampati",
"S.",
""
]
] |
1106.5261 | P. F. Patel-Schneider | P. F. Patel-Schneider, R. Sebastiani | A New General Method to Generate Random Modal Formulae for Testing
Decision Procedures | null | Journal Of Artificial Intelligence Research, Volume 18, pages
351-389, 2003 | 10.1613/jair.1166 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent emergence of heavily-optimized modal decision procedures has
highlighted the key role of empirical testing in this domain. Unfortunately,
the introduction of extensive empirical tests for modal logics is recent, and
so far none of the proposed test generators is very satisfactory. To cope with
this fact, we present a new random generation method that provides benefits
over previous methods for generating empirical tests. It fixes and greatly
generalizes one of the best-known methods, the random CNF_[]m test, allowing
for generating a much wider variety of problems, covering in principle the
whole input space. Our new method produces much more suitable test sets for the
current generation of modal decision procedures. We analyze the features of the
new method by means of an extensive collection of empirical tests.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:04:07 GMT"
}
] | 1,309,219,200,000 | [
[
"Patel-Schneider",
"P. F.",
""
],
[
"Sebastiani",
"R.",
""
]
] |
1106.5262 | S. Kambhampati | S. Kambhampati, R. Sanchez | AltAltp: Online Parallelization of Plans with Heuristic State Search | null | Journal Of Artificial Intelligence Research, Volume 19, pages
631-657, 2003 | 10.1613/jair.1168 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite their near dominance, heuristic state search planners still lag
behind disjunctive planners in the generation of parallel plans in classical
planning. The reason is that directly searching for parallel solutions in state
space planners would require the planners to branch on all possible subsets of
parallel actions, thus increasing the branching factor exponentially. We
present a variant of our heuristic state search planner AltAlt, called AltAltp
which generates parallel plans by using greedy online parallelization of
partial plans. The greedy approach is significantly informed by the use of
novel distance heuristics that AltAltp derives from a graphplan-style planning
graph for the problem. While this approach is not guaranteed to provide optimal
parallel plans, empirical results show that AltAltp is capable of generating
good quality parallel plans at a fraction of the cost incurred by the
disjunctive planners.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:04:32 GMT"
}
] | 1,309,219,200,000 | [
[
"Kambhampati",
"S.",
""
],
[
"Sanchez",
"R.",
""
]
] |
1106.5263 | B. Zanuttini | B. Zanuttini | New Polynomial Classes for Logic-Based Abduction | null | Journal Of Artificial Intelligence Research, Volume 19, pages
1-10, 2003 | 10.1613/jair.1170 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of propositional logic-based abduction, i.e., the
problem of searching for a best explanation for a given propositional
observation according to a given propositional knowledge base. We give a
general algorithm, based on the notion of projection; then we study
restrictions over the representations of the knowledge base and of the query,
and find new polynomial classes of abduction problems.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:04:51 GMT"
}
] | 1,309,219,200,000 | [
[
"Zanuttini",
"B.",
""
]
] |
1106.5265 | A. Gerevini | A. Gerevini, A. Saetti, I. Serina | Planning Through Stochastic Local Search and Temporal Action Graphs in
LPG | null | Journal Of Artificial Intelligence Research, Volume 20, pages
239-290, 2003 | 10.1613/jair.1183 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present some techniques for planning in domains specified with the recent
standard language PDDL2.1, supporting 'durative actions' and numerical
quantities. These techniques are implemented in LPG, a domain-independent
planner that took part in the 3rd International Planning Competition (IPC). LPG
is an incremental, anytime system producing multi-criteria quality plans. The
core of the system is based on a stochastic local search method and on a
graph-based representation called 'Temporal Action Graphs' (TA-graphs). This
paper focuses on temporal planning, introducing TA-graphs and proposing some
techniques to guide the search in LPG using this representation. The
experimental results of the 3rd IPC, as well as further results presented in
this paper, show that our techniques can be very effective. Often LPG
outperforms all other fully-automated planners of the 3rd IPC in terms of speed
to derive a solution, or quality of the solutions that can be produced.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:05:34 GMT"
}
] | 1,309,219,200,000 | [
[
"Gerevini",
"A.",
""
],
[
"Saetti",
"A.",
""
],
[
"Serina",
"I.",
""
]
] |
1106.5266 | J. Kvarnstr\"om | J. Kvarnstr\"om, M. Magnusson | TALplanner in IPC-2002: Extensions and Control Rules | null | Journal Of Artificial Intelligence Research, Volume 20, pages
343-377, 2003 | 10.1613/jair.1189 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | TALplanner is a forward-chaining planner that relies on domain knowledge in
the shape of temporal logic formulas in order to prune irrelevant parts of the
search space. TALplanner recently participated in the third International
Planning Competition, which had a clear emphasis on increasing the complexity
of the problem domains being used as benchmark tests and the expressivity
required to represent these domains in a planning system. Like many other
planners, TALplanner had support for some but not all aspects of this increase
in expressivity, and a number of changes to the planner were required. After a
short introduction to TALplanner, this article describes some of the changes
that were made before and during the competition. We also describe the process
of introducing suitable domain knowledge for several of the competition
domains.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:06:29 GMT"
}
] | 1,309,219,200,000 | [
[
"Kvarnström",
"J.",
""
],
[
"Magnusson",
"M.",
""
]
] |
1106.5268 | L. Console | L. Console, C. Picardi, D. Theseider Dupr\`e | Temporal Decision Trees: Model-based Diagnosis of Dynamic Systems
On-Board | null | Journal Of Artificial Intelligence Research, Volume 19, pages
469-512, 2003 | 10.1613/jair.1194 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The automatic generation of decision trees based on off-line reasoning on
models of a domain is a reasonable compromise between the advantages of using a
model-based approach in technical domains and the constraints imposed by
embedded applications. In this paper we extend the approach to deal with
temporal information. We introduce a notion of temporal decision tree, which is
designed to make use of relevant information as soon as it is acquired, and we
present an algorithm for compiling such trees from a model-based reasoning
system.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:07:43 GMT"
}
] | 1,309,219,200,000 | [
[
"Console",
"L.",
""
],
[
"Picardi",
"C.",
""
],
[
"Duprè",
"D. Theseider",
""
]
] |
1106.5269 | L. Finkelstein | L. Finkelstein, S. Markovitch, E. Rivlin | Optimal Schedules for Parallelizing Anytime Algorithms: The Case of
Shared Resources | null | Journal Of Artificial Intelligence Research, Volume 19, pages
73-138, 2003 | 10.1613/jair.1195 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The performance of anytime algorithms can be improved by simultaneously
solving several instances of algorithm-problem pairs. These pairs may include
different instances of a problem (such as starting from a different initial
state), different algorithms (if several alternatives exist), or several runs
of the same algorithm (for non-deterministic algorithms). In this paper we
present a methodology for designing an optimal scheduling policy based on the
statistical characteristics of the algorithms involved. We formally analyze the
case where the processes share resources (a single-processor model), and
provide an algorithm for optimal scheduling. We analyze, theoretically and
empirically, the behavior of our scheduling algorithm for various distribution
types. Finally, we present empirical results of applying our scheduling
algorithm to the Latin Square problem.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:08:20 GMT"
}
] | 1,309,219,200,000 | [
[
"Finkelstein",
"L.",
""
],
[
"Markovitch",
"S.",
""
],
[
"Rivlin",
"E.",
""
]
] |
1106.5270 | J. A. Csirik | J. A. Csirik, M. L. Littman, D. McAllester, R. E. Schapire, P. Stone | Decision-Theoretic Bidding Based on Learned Density Models in
Simultaneous, Interacting Auctions | null | Journal Of Artificial Intelligence Research, Volume 19, pages
209-242, 2003 | 10.1613/jair.1200 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Auctions are becoming an increasingly popular method for transacting
business, especially over the Internet. This article presents a general
approach to building autonomous bidding agents to bid in multiple simultaneous
auctions for interacting goods. A core component of our approach learns a model
of the empirical price dynamics based on past data and uses the model to
analytically calculate, to the greatest extent possible, optimal bids. We
introduce a new and general boosting-based algorithm for conditional density
estimation problems of this kind, i.e., supervised learning problems in which
the goal is to estimate the entire conditional distribution of the real-valued
label. This approach is fully implemented as ATTac-2001, a top-scoring agent in
the second Trading Agent Competition (TAC-01). We present experiments
demonstrating the effectiveness of our boosting-based price predictor relative
to several reasonable alternatives.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:08:54 GMT"
}
] | 1,309,219,200,000 | [
[
"Csirik",
"J. A.",
""
],
[
"Littman",
"M. L.",
""
],
[
"McAllester",
"D.",
""
],
[
"Schapire",
"R. E.",
""
],
[
"Stone",
"P.",
""
]
] |
1106.5271 | J. Hoffmann | J. Hoffmann | The Metric-FF Planning System: Translating "Ignoring Delete Lists" to
Numeric State Variables | null | Journal Of Artificial Intelligence Research, Volume 20, pages
291-341, 2003 | 10.1613/jair.1144 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning with numeric state variables has been a challenge for many years,
and was a part of the 3rd International Planning Competition (IPC-3). Currently
one of the most popular and successful algorithmic techniques in STRIPS
planning is to guide search by a heuristic function, where the heuristic is
based on relaxing the planning task by ignoring the delete lists of the
available actions. We present a natural extension of ``ignoring delete lists''
to numeric state variables, preserving the relevant theoretical properties of
the STRIPS relaxation under the condition that the numeric task at hand is
``monotonic''. We then identify a subset of the numeric IPC-3 competition
language, ``linear tasks'', where monotonicity can be achieved by
pre-processing. Based on that, we extend the algorithms used in the heuristic
planning system FF to linear tasks. The resulting system Metric-FF is,
according to the IPC-3 results which we discuss, one of the two currently most
efficient numeric planners.
| [
{
"version": "v1",
"created": "Sun, 26 Jun 2011 21:09:14 GMT"
}
] | 1,309,219,200,000 | [
[
"Hoffmann",
"J.",
""
]
] |
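To make the ``ignoring delete lists'' idea in the record above concrete, here is a minimal propositional sketch (the paper's numeric extension is not reproduced): actions lose their delete lists, and facts are chained forward until the goal appears. The fact and action encodings are our own illustration, not the paper's.

```python
# A minimal sketch of the "ignore delete lists" relaxation behind FF-style
# heuristics (propositional case only; Metric-FF's numeric extension is not
# shown). Counts forward-chaining fact layers as a crude heuristic estimate.

def relaxed_layers(state, goal, actions):
    """`state` and `goal` are sets of facts; each action is a
    (preconditions, add_effects) pair -- delete lists are simply dropped.
    Returns the number of relaxed fact layers needed to reach the goal,
    or None if the goal is unreachable even under the relaxation."""
    reached = set(state)
    layers = 0
    while not goal <= reached:
        new_facts = set()
        for pre, add in actions:
            if pre <= reached:
                new_facts |= add - reached
        if not new_facts:          # fixpoint without the goal: unreachable
            return None
        reached |= new_facts
        layers += 1
    return layers

actions = [
    (frozenset({"at-a"}), frozenset({"at-b"})),
    (frozenset({"at-b"}), frozenset({"have-key"})),
]
print(relaxed_layers({"at-a"}, {"have-key"}, actions))  # -> 2
```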
1106.5312 | Nina Narodytska | Nina Narodytska, Toby Walsh, Lirong Xia | Manipulation of Nanson's and Baldwin's Rules | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nanson's and Baldwin's voting rules select a winner by successively
eliminating candidates with low Borda scores. We show that these rules have a
number of desirable computational properties. In particular, with unweighted
votes, it is NP-hard to manipulate either rule with one manipulator, whilst
with weighted votes, it is NP-hard to manipulate either rule with a small
number of candidates and a coalition of manipulators. As only a couple of other
voting rules are known to be NP-hard to manipulate with a single manipulator,
Nanson's and Baldwin's rules appear to be particularly resistant to
manipulation from a theoretical perspective. We also propose a number of
approximation methods for manipulating these two rules. Experiments demonstrate
that both rules are often difficult to manipulate in practice. These results
suggest that elimination style voting rules deserve further study.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2011 06:42:04 GMT"
}
] | 1,309,219,200,000 | [
[
"Narodytska",
"Nina",
""
],
[
"Walsh",
"Toby",
""
],
[
"Xia",
"Lirong",
""
]
] |
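As a concrete companion to the record above, here is a minimal sketch of Baldwin's rule: repeatedly compute Borda scores over the remaining candidates and eliminate the lowest scorer. The lexicographic tie-breaking below is our own assumption, not something the paper fixes.

```python
# A minimal sketch of Baldwin's rule: successively eliminate the candidate
# with the lowest Borda score. Votes are rankings, best candidate first.

def borda_scores(candidates, votes):
    scores = {c: 0 for c in candidates}
    for vote in votes:
        ranking = [c for c in vote if c in candidates]
        for pos, c in enumerate(ranking):
            scores[c] += len(ranking) - 1 - pos
    return scores

def baldwin_winner(candidates, votes):
    remaining = set(candidates)
    while len(remaining) > 1:
        scores = borda_scores(remaining, votes)
        # assumed tie-break: lexicographically first among the lowest scorers
        loser = min(sorted(remaining), key=lambda c: scores[c])
        remaining.remove(loser)
    return remaining.pop()

votes = [("a", "b", "c"), ("b", "c", "a"), ("c", "a", "b"), ("a", "c", "b")]
print(baldwin_winner("abc", votes))  # b is eliminated first, then a; c wins
```

Nanson's rule differs only in the elimination step: it removes all candidates with below-average Borda score each round rather than just the single lowest.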
1106.5427 | You Xu | You Xu, Yixin Chen, Qiang Lu, Ruoyun Huang | Theory and Algorithms for Partial Order Based Reduction in Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Search is a major technique for planning. It amounts to exploring a state
space of planning domains typically modeled as a directed graph. However,
prohibitively large sizes of the search space make search expensive. Developing
better heuristic functions has been the main technique for improving search
efficiency. Nevertheless, recent studies have shown that improving heuristics
alone has certain fundamental limits on improving search efficiency. Recently,
a new direction of research called partial order based reduction (POR) has been
proposed as an alternative to improving heuristics. POR has shown promise in
speeding up searches.
POR has been extensively studied in model checking research and is a key
enabling technique for scalability of model checking systems. Although the POR
theory has been extensively studied in model checking, it has never been
developed systematically for planning before. In addition, the conditions for
POR in the model checking theory are abstract and not directly applicable in
planning. Previous works on POR algorithms for planning did not establish the
connection between these algorithms and existing theory in model checking.
In this paper, we develop a theory for POR in planning. The new theory we
develop connects the stubborn set theory in model checking and POR methods in
planning. We show that previous POR algorithms in planning can be explained by
the new theory. Based on the new theory, we propose a new, stronger POR
algorithm. Experimental results on various planning domains show further search
cost reduction using the new algorithm.
| [
{
"version": "v1",
"created": "Mon, 27 Jun 2011 16:06:27 GMT"
}
] | 1,309,219,200,000 | [
[
"Xu",
"You",
""
],
[
"Chen",
"Yixin",
""
],
[
"Lu",
"Qiang",
""
],
[
"Huang",
"Ruoyun",
""
]
] |
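For readers unfamiliar with partial order reduction, the sketch below shows one generic commutativity test often used in POR-style pruning of STRIPS actions; it is a textbook condition under our own action encoding, not the stubborn-set theory developed in the paper.

```python
# A hedged sketch of a commutativity test often used in partial order
# reduction: two STRIPS actions can be swapped in a plan if neither destroys
# the other's preconditions or add effects.

from collections import namedtuple

Action = namedtuple("Action", "pre add delete")

def independent(a1, a2):
    return (not a1.delete & (a2.pre | a2.add)
            and not a2.delete & (a1.pre | a1.add))

load  = Action(pre={"at-depot"}, add={"loaded"}, delete=set())
drive = Action(pre={"at-depot"}, add={"at-city"}, delete={"at-depot"})
paint = Action(pre=set(), add={"painted"}, delete=set())

print(independent(load, paint))   # True: the two orders reach the same state
print(independent(load, drive))   # False: drive deletes load's precondition
```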
1106.5890 | Hiu Chun Woo | Yat-Chiu Law and Jimmy Ho-Man Lee and May Hiu-Chun Woo and Toby Walsh | A Comparison of Lex Bounds for Multiset Variables in Constraint
Programming | 7 pages, Proceedings of the Twenty-Fifth AAAI Conference on
Artificial Intelligence (AAAI-11) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Set and multiset variables in constraint programming have typically been
represented using subset bounds. However, this is a weak representation that
neglects potentially useful information about a set such as its cardinality.
For set variables, the length-lex (LL) representation successfully provides
information about the length (cardinality) and position in the lexicographic
ordering. For multiset variables, where elements can be repeated, we consider
richer representations that take into account additional information. We study
eight different representations in which we maintain bounds according to one of
the eight different orderings: length-(co)lex (LL/LC), variety-(co)lex (VL/VC),
length-variety-(co)lex (LVL/LVC), and variety-length-(co)lex (VLL/VLC)
orderings. These representations integrate together information about the
cardinality, variety (number of distinct elements in the multiset), and
position in some total ordering. Theoretical and empirical comparisons of
expressiveness and compactness of the eight representations suggest that
length-variety-(co)lex (LVL/LVC) and variety-length-(co)lex (VLL/VLC) usually
give tighter bounds after constraint propagation. We implement the eight
representations and evaluate them against the subset bounds representation with
cardinality and variety reasoning. Results demonstrate that they offer
significantly better pruning and runtime.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2011 09:57:43 GMT"
}
] | 1,309,392,000,000 | [
[
"Law",
"Yat-Chiu",
""
],
[
"Lee",
"Jimmy Ho-Man",
""
],
[
"Woo",
"May Hiu-Chun",
""
],
[
"Walsh",
"Toby",
""
]
] |
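To illustrate one of the orderings above, here is a minimal sketch of length-variety-lex (LVL) comparison of multisets; representing a multiset as a sorted tuple for the lex component is our own assumption.

```python
# A minimal sketch of the length-variety-lex (LVL) ordering: multisets are
# compared first by cardinality, then by variety (number of distinct
# elements), then lexicographically on their sorted form.

from collections import Counter

def lvl_key(multiset):
    ms = sorted(multiset)
    return (len(ms), len(Counter(ms)), tuple(ms))

candidates = [[1, 1, 2], [1, 2, 3], [1, 1, 1], [2, 2, 3]]
for ms in sorted(candidates, key=lvl_key):
    print(ms)
# All have length 3, so variety decides: [1,1,1] before [1,1,2] and [2,2,3],
# which precede [1,2,3]; the lex component puts [1,1,2] before [2,2,3].
```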
1106.5998 | M. Fox | M. Fox, D. Long | The 3rd International Planning Competition: Results and Analysis | null | Journal Of Artificial Intelligence Research, Volume 20, pages
1-59, 2003 | 10.1613/jair.1240 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports the outcome of the third in the series of biennial
international planning competitions, held in association with the International
Conference on AI Planning and Scheduling (AIPS) in 2002. In addition to
describing the domains, the planners and the objectives of the competition, the
paper includes analysis of the results. The results are analysed from several
perspectives, in order to address the questions of comparative performance
between planners, comparative difficulty of domains, the degree of agreement
between planners about the relative difficulty of individual problem instances
and the question of how well planners scale relative to one another over
increasingly difficult problems. The paper addresses these questions through
statistical analysis of the raw results of the competition, in order to
determine which results can be considered to be adequately supported by the
data. The paper concludes with a discussion of some challenges for the future
of the competition series.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2011 16:42:59 GMT"
}
] | 1,309,392,000,000 | [
[
"Fox",
"M.",
""
],
[
"Long",
"D.",
""
]
] |
1106.6022 | W. P. Birmingham | W. P. Birmingham, E. H. Durfee, S. Park | Use of Markov Chains to Design an Agent Bidding Strategy for Continuous
Double Auctions | null | Journal Of Artificial Intelligence Research, Volume 22, pages
175-214, 2004 | 10.1613/jair.1466 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As computational agents are developed for increasingly complicated e-commerce
applications, the complexity of the decisions they face demands advances in
artificial intelligence techniques. For example, an agent representing a seller
in an auction should try to maximize the seller's profit by reasoning about a
variety of possibly uncertain pieces of information, such as the maximum prices
various buyers might be willing to pay, the possible prices being offered by
competing sellers, the rules by which the auction operates, the dynamic arrival
and matching of offers to buy and sell, and so on. A naive application of
multiagent reasoning techniques would require the seller's agent to explicitly
model all of the other agents through an extended time horizon, rendering the
problem intractable for many realistically-sized problems. We have instead
devised a new strategy that an agent can use to determine its bid price based
on a more tractable Markov chain model of the auction process. We have
experimentally identified the conditions under which our new strategy works
well, as well as how well it works in comparison to the optimal performance the
agent could have achieved had it known the future. Our results show that our
new strategy in general performs well, outperforming other tractable heuristic
strategies in a majority of experiments, and is particularly effective in a
'seller's market', where many buy offers are available.
| [
{
"version": "v1",
"created": "Wed, 29 Jun 2011 18:38:48 GMT"
}
] | 1,483,833,600,000 | [
[
"Birmingham",
"W. P.",
""
],
[
"Durfee",
"E. H.",
""
],
[
"Park",
"S.",
""
]
] |
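The following toy sketch illustrates the general flavour of evaluating a seller's ask with a Markov chain, not the paper's model: states track the standing buy offer, a sale is an absorbing state, and the expected number of steps until the ask is met follows from a standard absorbing-chain computation. All transition numbers are made up.

```python
# A toy absorbing Markov chain for a seller in a continuous double auction.
# States 0..2: current best buy offer of $0, $1, $2; state 3: sale at the ask.

import numpy as np

P = np.array([
    [0.6, 0.4, 0.0, 0.0],
    [0.2, 0.5, 0.3, 0.0],
    [0.0, 0.3, 0.5, 0.2],
    [0.0, 0.0, 0.0, 1.0],   # sale is absorbing
])

# Expected steps to absorption: solve (I - Q) t = 1 over the transient block.
Q = P[:3, :3]
t = np.linalg.solve(np.eye(3) - Q, np.ones(3))
print(t)  # expected waiting time until the ask is met, per buy-offer level
```

A seller's agent could repeat this computation for several candidate asks and trade price off against expected waiting time, which is roughly the kind of tractable reasoning the abstract advocates over full multiagent modelling.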
1107.0018 | A. Al-Ani | A. Al-Ani, M. Deriche | A New Technique for Combining Multiple Classifiers using The
Dempster-Shafer Theory of Evidence | null | Journal Of Artificial Intelligence Research, Volume 17, pages
333-361, 2002 | 10.1613/jair.1026 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new classifier combination technique based on the
Dempster-Shafer theory of evidence. The Dempster-Shafer theory of evidence is a
powerful method for combining measures of evidence from different classifiers.
However, since each of the available methods that estimates the evidence of
classifiers has its own limitations, we propose here a new implementation which
adapts to training data so that the overall mean square error is minimized. The
proposed technique is shown to outperform most available classifier combination
methods when tested on three different classification problems.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:31:52 GMT"
}
] | 1,309,737,600,000 | [
[
"Al-Ani",
"A.",
""
],
[
"Deriche",
"M.",
""
]
] |
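For reference, the sketch below implements plain Dempster's rule of combination, the operation the technique above builds on; the paper's adaptive, error-minimizing evidence estimation step is not reproduced here.

```python
# A minimal sketch of Dempster's rule of combination. Mass functions are
# dictionaries from frozenset hypotheses to belief mass.

def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb          # mass on disjoint hypotheses
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Two classifiers weighing in on classes {x, y}:
m1 = {frozenset("x"): 0.7, frozenset("xy"): 0.3}
m2 = {frozenset("x"): 0.5, frozenset("y"): 0.2, frozenset("xy"): 0.3}
print(dempster_combine(m1, m2))  # combined, renormalized belief masses
```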
1107.0019 | S. Acid | S. Acid, L. M. de Campos | Searching for Bayesian Network Structures in the Space of Restricted
Acyclic Partially Directed Graphs | null | Journal Of Artificial Intelligence Research, Volume 18, pages
445-490, 2003 | 10.1613/jair.1061 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although many algorithms have been designed to construct Bayesian network
structures using different approaches and principles, they all employ only two
methods: those based on independence criteria, and those based on a scoring
function and a search procedure (although some methods combine the two). Within
the score+search paradigm, the dominant approach uses local search methods in
the space of directed acyclic graphs (DAGs), where the usual choices for
defining the elementary modifications (local changes) that can be applied are
arc addition, arc deletion, and arc reversal. In this paper, we propose a new
local search method that uses a different search space, and which takes account
of the concept of equivalence between network structures: restricted acyclic
partially directed graphs (RPDAGs). In this way, the number of different
configurations of the search space is reduced, thus improving efficiency.
Moreover, although the final result must necessarily be a local optimum given
the nature of the search method, the topology of the new search space, which
avoids making early decisions about the directions of the arcs, may help to
find better local optima than those obtained by searching in the DAG space.
Detailed results of the evaluation of the proposed search method on several
test problems, including the well-known Alarm Monitoring System, are also
presented.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:32:05 GMT"
}
] | 1,309,737,600,000 | [
[
"Acid",
"S.",
""
],
[
"de Campos",
"L. M.",
""
]
] |
1107.0020 | O. Grumberg | O. Grumberg, S. Livne, S. Markovitch | Learning to Order BDD Variables in Verification | null | Journal Of Artificial Intelligence Research, Volume 18, pages
83-116, 2003 | 10.1613/jair.1096 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The size and complexity of software and hardware systems have significantly
increased in the past years. As a result, it is harder to guarantee their
correct behavior. One of the most successful methods for automated verification
of finite-state systems is model checking. Most of the current model-checking
systems use binary decision diagrams (BDDs) for the representation of the
tested model and in the verification process of its properties. Generally, BDDs
allow a canonical compact representation of a boolean function (given an order
of its variables). The more compact the BDD is, the better performance one gets
from the verifier. However, finding an optimal order for a BDD is an
NP-complete problem. Therefore, several heuristic methods based on expert
knowledge have been developed for variable ordering. We propose an alternative
approach in which the variable ordering algorithm gains 'ordering experience'
from training models and uses the learned knowledge for finding good orders.
Our methodology is based on offline learning of pair precedence classifiers
from training models, that is, learning which variable pair permutation is more
likely to lead to a good order. For each training model, a number of training
sequences are evaluated. Every training model variable pair permutation is then
tagged based on its performance on the evaluated orders. The tagged
permutations are then passed through a feature extractor and are given as
examples to a classifier creation algorithm. Given a model for which an order
is requested, the ordering algorithm consults each precedence classifier and
constructs a pair precedence table which is used to create the order. Our
algorithm was integrated with SMV, which is one of the most widely used
verification systems. Preliminary empirical evaluation of our methodology,
using real benchmark models, shows performance that is better than random
ordering and is competitive with existing algorithms that use expert knowledge.
We believe that in sub-domains of models (alu, caches, etc.) our system will
prove even more valuable. This is because it features the ability to learn
sub-domain knowledge, something that no other ordering algorithm does.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:32:16 GMT"
}
] | 1,309,737,600,000 | [
[
"Grumberg",
"O.",
""
],
[
"Livne",
"S.",
""
],
[
"Markovitch",
"S.",
""
]
] |
1107.0021 | W. E. Walsh | W. E. Walsh, M. P. Wellman | Decentralized Supply Chain Formation: A Market Protocol and Competitive
Equilibrium Analysis | null | Journal Of Artificial Intelligence Research, Volume 19, pages
513-567, 2003 | 10.1613/jair.1213 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supply chain formation is the process of determining the structure and terms
of exchange relationships to enable a multilevel, multiagent production
activity. We present a simple model of supply chains, highlighting two
characteristic features: hierarchical subtask decomposition, and resource
contention. To decentralize the formation process, we introduce a market price
system over the resources produced along the chain. In a competitive
equilibrium for this system, agents choose locally optimal allocations with
respect to prices, and outcomes are optimal overall. To determine prices, we
define a market protocol based on distributed, progressive auctions, and
myopic, non-strategic agent bidding policies. In the presence of resource
contention, this protocol produces better solutions than the greedy protocols
common in the artificial intelligence and multiagent systems literature. The
protocol often converges to high-value supply chains, and when competitive
equilibria exist, typically to approximate competitive equilibria. However,
complementarities in agent production technologies can cause the protocol to
wastefully allocate inputs to agents that do not produce their outputs. A
subsequent decommitment phase recovers a significant fraction of the lost
surplus.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:32:28 GMT"
}
] | 1,309,737,600,000 | [
[
"Walsh",
"W. E.",
""
],
[
"Wellman",
"M. P.",
""
]
] |
1107.0023 | C. Boutilier | C. Boutilier, R. I. Brafman, C. Domshlak, H. H. Hoos, D. Poole | CP-nets: A Tool for Representing and Reasoning with Conditional Ceteris
Paribus Preference Statements | null | Journal Of Artificial Intelligence Research, Volume 21, pages
135-191, 2004 | 10.1613/jair.1234 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information about user preferences plays a key role in automated decision
making. In many domains it is desirable to assess such preferences in a
qualitative rather than quantitative way. In this paper, we propose a
qualitative graphical representation of preferences that reflects conditional
dependence and independence of preference statements under a ceteris paribus
(all else being equal) interpretation. Such a representation is often compact
and arguably quite natural in many circumstances. We provide a formal semantics
for this model, and describe how the structure of the network can be exploited
in several inference tasks, such as determining whether one outcome dominates
(is preferred to) another, ordering a set of outcomes according to the preference
relation, and constructing the best outcome subject to available evidence.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:32:52 GMT"
}
] | 1,309,737,600,000 | [
[
"Boutilier",
"C.",
""
],
[
"Brafman",
"R. I.",
""
],
[
"Domshlak",
"C.",
""
],
[
"Hoos",
"H. H.",
""
],
[
"Poole",
"D.",
""
]
] |
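One of the inference tasks mentioned above, constructing the best outcome, has a particularly simple form: sweep the variables in an order consistent with the dependency graph and set each one to its preferred value given its parents' chosen values. The tiny dinner-style network below is our own illustration, not an example from the paper.

```python
# A minimal sketch of outcome optimization in a CP-net: each variable maps to
# (parents, conditional preference table keyed by parent assignments).

cpnet = {
    "main": ((), {(): "fish"}),
    "wine": (("main",), {("fish",): "white", ("meat",): "red"}),
}

def best_outcome(cpnet, order):
    outcome = {}
    for var in order:              # order must respect the dependency links
        parents, table = cpnet[var]
        key = tuple(outcome[p] for p in parents)
        outcome[var] = table[key]  # most preferred value given the parents
    return outcome

print(best_outcome(cpnet, ["main", "wine"]))
# -> {'main': 'fish', 'wine': 'white'}
```

Dominance testing, by contrast, is the hard task in CP-nets; this sweep only finds an outcome that no single preference statement can improve.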
1107.0024 | A. Darwiche | A. Darwiche, J. D. Park | Complexity Results and Approximation Strategies for MAP Explanations | null | Journal Of Artificial Intelligence Research, Volume 21, pages
101-133, 2006 | 10.1613/jair.1236 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MAP is the problem of finding a most probable instantiation of a set of
variables given evidence. MAP has always been perceived to be significantly
harder than the related problems of computing the probability of a variable
instantiation Pr, or the problem of computing the most probable explanation
(MPE). This paper investigates the complexity of MAP in Bayesian networks.
Specifically, we show that MAP is complete for NP^PP and provide further
negative complexity results for algorithms based on variable elimination. We
also show that MAP remains hard even when MPE and Pr become easy. For example,
we show that MAP is NP-complete when the networks are restricted to polytrees,
and even then cannot be effectively approximated. Given the difficulty of
computing MAP exactly, and the difficulty of approximating MAP while providing
useful guarantees on the resulting approximation, we investigate best effort
approximations. We introduce a generic MAP approximation framework. We provide
two instantiations of the framework; one for networks which are amenable to
exact inference Pr, and one for networks for which even exact inference is too
hard. This allows MAP approximation on networks that are too complex to solve
even the easier problems, Pr and MPE, exactly. Experimental results indicate
that using these approximation algorithms provides much better solutions than
standard techniques, and provide accurate MAP estimates in many cases.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:33:03 GMT"
}
] | 1,309,737,600,000 | [
[
"Darwiche",
"A.",
""
],
[
"Park",
"J. D.",
""
]
] |
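A brute-force illustration of the MAP task studied above, on a tiny chain network A -> B -> C: maximize over the MAP variable A, sum out B, and condition on evidence for C. Exhaustive enumeration like this is exponential in general, which is what motivates the paper's approximation framework. The network and its numbers are made up.

```python
# Brute-force MAP on a tiny Bayesian network A -> B -> C: argmax over A of
# sum_B P(A) P(B|A) P(C=evidence|B).

pA = {0: 0.6, 1: 0.4}
pB_given_A = {0: {0: 0.7, 1: 0.3}, 1: {0: 0.2, 1: 0.8}}
pC_given_B = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.4, 1: 0.6}}

def map_over_A(evidence_c):
    best, best_score = None, -1.0
    for a in (0, 1):
        # sum out the non-MAP variable B
        score = sum(pA[a] * pB_given_A[a][b] * pC_given_B[b][evidence_c]
                    for b in (0, 1))
        if score > best_score:
            best, best_score = a, score
    return best, best_score

print(map_over_A(evidence_c=1))  # -> (1, 0.2): A=1 is the MAP assignment
```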
1107.0025 | S. Edelkamp | S. Edelkamp | Taming Numbers and Durations in the Model Checking Integrated Planning
System | null | Journal Of Artificial Intelligence Research, Volume 20, pages
195-238, 2003 | 10.1613/jair.1302 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Model Checking Integrated Planning System (MIPS) is a temporal least
commitment heuristic search planner based on a flexible object-oriented
workbench architecture. Its design clearly separates explicit and symbolic
directed exploration algorithms from the set of on-line and off-line computed
estimates and associated data structures. MIPS has shown distinguished
performance in the last two international planning competitions. In the last
event the description language was extended from pure propositional planning to
include numerical state variables, action durations, and plan quality objective
functions. Plans were no longer sequences of actions but time-stamped
schedules. As a participant of the fully automated track of the competition,
MIPS has proven to be a general system; in each track and every benchmark
domain it efficiently computed plans of remarkable quality. This article
introduces and analyzes the most important algorithmic novelties that were
necessary to tackle the new layers of expressiveness in the benchmark problems
and to achieve a high level of performance. The extensions include critical
path analysis of sequentially generated plans to generate corresponding optimal
parallel plans. The linear time algorithm to compute the parallel plan bypasses
known NP-hardness results for partial ordering by scheduling plans with respect
to the set of actions and the imposed precedence relations. The efficiency of
this algorithm also allows us to improve the exploration guidance: for each
encountered planning state the corresponding approximate sequential plan is
scheduled. One major strength of MIPS is its static analysis phase that grounds
and simplifies parameterized predicates, functions and operators, that infers
knowledge to minimize the state description length, and that detects domain
object symmetries. The latter aspect is analyzed in detail. MIPS has been
developed to serve as a complete and optimal state space planner, with
admissible estimates, exploration engines and branching cuts. In the
competition version, however, certain performance compromises had to be made,
including floating point arithmetic, weighted heuristic search exploration
according to an inadmissible estimate and parameterized optimization.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:33:38 GMT"
}
] | 1,309,737,600,000 | [
[
"Edelkamp",
"S.",
""
]
] |
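The critical-path step described above can be illustrated with a short sketch: given a sequential plan, action durations, and the genuine precedence links, each action is scheduled at the earliest finish time of its required predecessors, in time linear in the size of the precedence relation. The plan below is our own example, not one from the competition domains.

```python
# A minimal sketch of critical-path scheduling of a sequential plan into a
# parallel, time-stamped schedule.

def schedule(plan, duration, preceders):
    """plan: actions in sequential order; preceders[a]: actions that must
    finish before a starts (a subset of a's predecessors in the sequence)."""
    start = {}
    for a in plan:  # the sequential order is already a topological order
        start[a] = max((start[p] + duration[p] for p in preceders[a]),
                       default=0.0)
    makespan = max(start[a] + duration[a] for a in plan)
    return start, makespan

plan = ["load1", "load2", "drive", "unload1"]
duration = {"load1": 2.0, "load2": 2.0, "drive": 5.0, "unload1": 1.0}
preceders = {"load1": [], "load2": [], "drive": ["load1", "load2"],
             "unload1": ["drive"]}
print(schedule(plan, duration, preceders))
# load1 and load2 run in parallel; makespan 8.0 instead of the sequential 10.0
```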
1107.0026 | M. J. Nederhof | M. J. Nederhof, G. Satta | IDL-Expressions: A Formalism for Representing and Parsing Finite
Languages in Natural Language Processing | null | Journal Of Artificial Intelligence Research, Volume 21, pages
287-317, 2004 | 10.1613/jair.1309 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a formalism for representation of finite languages, referred to as
the class of IDL-expressions, which combines concepts that were only considered
in isolation in existing formalisms. The suggested applications are in natural
language processing, more specifically in surface natural language generation
and in machine translation, where a sentence is obtained by first generating a
large set of candidate sentences, represented in a compact way, and then by
filtering such a set through a parser. We study several formal properties of
IDL-expressions and compare this new formalism with more standard ones. We also
present a novel parsing algorithm for IDL-expressions and prove a non-trivial
upper bound on its time complexity.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:33:50 GMT"
}
] | 1,309,737,600,000 | [
[
"Nederhof",
"M. J.",
""
],
[
"Satta",
"G.",
""
]
] |
1107.0027 | T. Kocka | T. Kocka, N. L. Zhang | Effective Dimensions of Hierarchical Latent Class Models | null | Journal Of Artificial Intelligence Research, Volume 21, pages
1-17, 2004 | 10.1613/jair.1311 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical latent class (HLC) models are tree-structured Bayesian networks
where leaf nodes are observed while internal nodes are latent. There are no
theoretically well justified model selection criteria for HLC models in
particular and Bayesian networks with latent nodes in general. Nonetheless,
empirical studies suggest that the BIC score is a reasonable criterion to use
in practice for learning HLC models. Empirical studies also suggest that
sometimes model selection can be improved if standard model dimension is
replaced with effective model dimension in the penalty term of the BIC score.
Effective dimensions are difficult to compute. In this paper, we prove a
theorem that relates the effective dimension of an HLC model to the effective
dimensions of a number of latent class models. The theorem makes it
computationally feasible to compute the effective dimensions of large HLC
models. The theorem can also be used to compute the effective dimensions of
general tree models.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:34:11 GMT"
}
] | 1,309,737,600,000 | [
[
"Kocka",
"T.",
""
],
[
"Zhang",
"N. L.",
""
]
] |
1107.0030 | O. Arieli | O. Arieli, M. Bruynooghe, M. Denecker, B. Van Nuffelen | Coherent Integration of Databases by Abductive Logic Programming | null | Journal Of Artificial Intelligence Research, Volume 21, pages
245-286, 2004 | 10.1613/jair.1322 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an abductive method for a coherent integration of independent
data-sources. The idea is to compute a list of data-facts that should be
inserted to the amalgamated database or retracted from it in order to restore
its consistency. This method is implemented by an abductive solver, called
Asystem, that applies SLDNFA-resolution on a meta-theory that relates
different, possibly contradicting, input databases. We also give a pure
model-theoretic analysis of the possible ways to `recover' consistent data from
an inconsistent database in terms of those models of the database that exhibit
as little inconsistent information as reasonably possible. This allows us to
characterize the `recovered databases' in terms of the `preferred' (i.e., most
consistent) models of the theory. The outcome is an abductive-based application
that is sound and complete with respect to a corresponding model-based,
preferential semantics, and -- to the best of our knowledge -- is more
expressive (thus more general) than any other implementation of coherent
integration of databases.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:34:53 GMT"
}
] | 1,309,737,600,000 | [
[
"Arieli",
"O.",
""
],
[
"Bruynooghe",
"M.",
""
],
[
"Denecker",
"M.",
""
],
[
"Van Nuffelen",
"B.",
""
]
] |
1107.0031 | P. Gorniak | P. Gorniak, D. Roy | Grounded Semantic Composition for Visual Scenes | null | Journal Of Artificial Intelligence Research, Volume 21, pages
429-470, 2004 | 10.1613/jair.1327 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a visually-grounded language understanding model based on a study
of how people verbally describe objects in scenes. The emphasis of the model is
on the combination of individual word meanings to produce meanings for complex
referring expressions. The model has been implemented, and it is able to
understand a broad range of spatial referring expressions. We describe our
implementation of word level visually-grounded semantics and their embedding in
a compositional parsing framework. The implemented system selects the correct
referents in response to natural language expressions for a large percentage of
test cases. In an analysis of the system's successes and failures we reveal how
visual context influences the semantics of utterances and propose future
extensions to the model that take such context into account.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:35:04 GMT"
}
] | 1,309,737,600,000 | [
[
"Gorniak",
"P.",
""
],
[
"Roy",
"D.",
""
]
] |
1107.0034 | K. M. Lochner | K. M. Lochner, D. M. Reeves, Y. Vorobeychik, M. P. Wellman | Price Prediction in a Trading Agent Competition | null | Journal Of Artificial Intelligence Research, Volume 21, pages
19-36, 2004 | 10.1613/jair.1333 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The 2002 Trading Agent Competition (TAC) presented a challenging market game
in the domain of travel shopping. One of the pivotal issues in this domain is
uncertainty about hotel prices, which have a significant influence on the
relative cost of alternative trip schedules. Thus, virtually all participants
employ some method for predicting hotel prices. We survey approaches employed
in the tournament, finding that agents apply an interesting diversity of
techniques, taking into account differing sources of evidence bearing on
prices. Based on data provided by entrants on their agents' actual predictions
in the TAC-02 finals and semifinals, we analyze the relative efficacy of these
approaches. The results show that taking into account game-specific information
about flight prices is a major distinguishing factor. Machine learning methods
effectively induce the relationship between flight and hotel prices from game
data, and a purely analytical approach based on competitive equilibrium
analysis achieves equal accuracy with no historical data. Employing a new
measure of prediction quality, we relate absolute accuracy to bottom-line
performance in the game.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:35:25 GMT"
}
] | 1,309,737,600,000 | [
[
"Lochner",
"K. M.",
""
],
[
"Reeves",
"D. M.",
""
],
[
"Vorobeychik",
"Y.",
""
],
[
"Wellman",
"M. P.",
""
]
] |
1107.0035 | J. Keppens | J. Keppens, Q. Shen | Compositional Model Repositories via Dynamic Constraint Satisfaction
with Order-of-Magnitude Preferences | null | Journal Of Artificial Intelligence Research, Volume 21, pages
499-550, 2004 | 10.1613/jair.1335 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The predominant knowledge-based approach to automated model construction,
compositional modelling, employs a set of models of particular functional
components. Its inference mechanism takes a scenario describing the constituent
interacting components of a system and translates it into a useful mathematical
model. This paper presents a novel compositional modelling approach aimed at
building model repositories. It furthers the field in two respects. Firstly, it
expands the application domain of compositional modelling to systems that cannot
be easily described in terms of interacting functional components, such as
ecological systems. Secondly, it enables the incorporation of user preferences
into the model selection process. These features are achieved by casting the
compositional modelling problem as an activity-based dynamic preference
constraint satisfaction problem, where the dynamic constraints describe the
restrictions imposed over the composition of partial models and the preferences
correspond to those of the user of the automated modeller. In addition, the
preference levels are represented through the use of symbolic values that
differ in orders of magnitude.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:35:37 GMT"
}
] | 1,309,737,600,000 | [
[
"Keppens",
"J.",
""
],
[
"Shen",
"Q.",
""
]
] |
1107.0037 | R. Miikkulainen | R. Miikkulainen, K. O. Stanley | Competitive Coevolution through Evolutionary Complexification | null | Journal Of Artificial Intelligence Research, Volume 21, pages
63-100, 2004 | 10.1613/jair.1338 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two major goals in machine learning are the discovery and improvement of
solutions to complex problems. In this paper, we argue that complexification,
i.e. the incremental elaboration of solutions through adding new structure,
achieves both these goals. We demonstrate the power of complexification through
the NeuroEvolution of Augmenting Topologies (NEAT) method, which evolves
increasingly complex neural network architectures. NEAT is applied to an
open-ended coevolutionary robot duel domain where robot controllers compete
head to head. Because the robot duel domain supports a wide range of
strategies, and because coevolution benefits from an escalating arms race, it
serves as a suitable testbed for studying complexification. When compared to
the evolution of networks with fixed structure, complexifying evolution
discovers significantly more sophisticated strategies. The results suggest that
in order to discover and improve complex solutions, evolution, and search in
general, should be allowed to complexify as well as optimize.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:36:55 GMT"
}
] | 1,309,737,600,000 | [
[
"Miikkulainen",
"R.",
""
],
[
"Stanley",
"K. O.",
""
]
] |
1107.0038 | B. Hnich | B. Hnich, B. M. Smith, T. Walsh | Dual Modelling of Permutation and Injection Problems | null | Journal Of Artificial Intelligence Research, Volume 21, pages
357-391, 2004 | 10.1613/jair.1351 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When writing a constraint program, we have to choose which variables should
be the decision variables, and how to represent the constraints on these
variables. In many cases, there is considerable choice for the decision
variables. Consider, for example, permutation problems in which we have as many
values as variables, and each variable takes a unique value. In such problems,
we can choose between a primal and a dual viewpoint. In the dual viewpoint,
each dual variable represents one of the primal values, whilst each dual value
represents one of the primal variables. Alternatively, by means of channelling
constraints to link the primal and dual variables, we can have a combined model
with both sets of variables. In this paper, we perform an extensive theoretical
and empirical study of such primal, dual and combined models for two classes of
problems: permutation problems and injection problems. Our results show that it
can often be advantageous to use multiple viewpoints, and to have constraints which
channel between them to maintain consistency. They also illustrate a general
methodology for comparing different constraint models.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:37:09 GMT"
}
] | 1,309,737,600,000 | [
[
"Hnich",
"B.",
""
],
[
"Smith",
"B. M.",
""
],
[
"Walsh",
"T.",
""
]
] |
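A minimal sketch of the combined primal/dual model for permutation problems described above: primal variable x[i] holds the value at position i, dual variable d[j] holds the position of value j, and the channelling constraint x[i] = j <=> d[j] = i links them. The brute-force check below only illustrates the modelling; a constraint solver would propagate these constraints during search.

```python
# Primal/dual viewpoints of a permutation linked by channelling constraints.

from itertools import permutations

def channel(x):
    """Build the dual assignment implied by a primal permutation."""
    d = [None] * len(x)
    for i, j in enumerate(x):
        d[j] = i                 # value j sits at position i
    return d

def consistent(x, d):
    """Check the channelling constraint x[i] = j <=> d[j] = i."""
    return all(d[j] == i for i, j in enumerate(x))

for x in permutations(range(3)):
    d = channel(x)
    assert consistent(x, d)
    print(x, "<->", tuple(d))
```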
1107.0040 | H. E. Dixon | H. E. Dixon, M. L. Ginsberg, A. J. Parkes | Generalizing Boolean Satisfiability I: Background and Survey of Existing
Work | null | Journal Of Artificial Intelligence Research, Volume 21, pages
193-243, 2004 | 10.1613/jair.1353 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the first of three planned papers describing ZAP, a satisfiability
engine that substantially generalizes existing tools while retaining the
performance characteristics of modern high-performance solvers. The fundamental
idea underlying ZAP is that many problems passed to such engines contain rich
internal structure that is obscured by the Boolean representation used; our
goal is to define a representation in which this structure is apparent and can
easily be exploited to improve computational performance. This paper is a
survey of the work underlying ZAP, and discusses previous attempts to improve
the performance of the Davis-Putnam-Logemann-Loveland algorithm by exploiting
the structure of the problem being solved. We examine existing ideas including
extensions of the Boolean language to allow cardinality constraints,
pseudo-Boolean representations, symmetry, and a limited form of quantification.
While this paper is intended as a survey, our research results are contained in
the two subsequent articles, with the theoretical structure of ZAP described in
the second paper in this series, and ZAP's implementation described in the
third.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:38:04 GMT"
}
] | 1,309,737,600,000 | [
[
"Dixon",
"H. E.",
""
],
[
"Ginsberg",
"M. L.",
""
],
[
"Parkes",
"A. J.",
""
]
] |
1107.0041 | A. Ben-Yair | A. Ben-Yair, A. Felner, S. Kraus, N. Netanyahu, R. Stern | PHA*: Finding the Shortest Path with A* in An Unknown Physical
Environment | null | Journal Of Artificial Intelligence Research, Volume 21, pages
631-670, 2004 | 10.1613/jair.1373 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of finding the shortest path between two points in an
unknown real physical environment, where a traveling agent must move around in
the environment to explore unknown territory. We introduce the Physical-A*
algorithm (PHA*) for solving this problem. PHA* expands all the mandatory nodes
that A* would expand and returns the shortest path between the two points.
However, due to the physical nature of the problem, the complexity of the
algorithm is measured by the traveling effort of the moving agent and not by
the number of generated nodes, as in standard A*. PHA* is presented as a
two-level algorithm, such that its high level, A*, chooses the next node to be
expanded and its low level directs the agent to that node in order to explore
it. We present a number of variations for both the high-level and low-level
procedures and evaluate their performance theoretically and experimentally. We
show that the travel cost of our best variation is fairly close to the optimal
travel cost, assuming that the mandatory nodes of A* are known in advance. We
then generalize our algorithm to the multi-agent case, where a number of
cooperative agents are designed to solve the problem. Specifically, we provide
an experimental implementation for such a system. It should be noted that the
problem addressed here is not a navigation problem, but rather a problem of
finding the shortest path between two points for future usage.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:38:33 GMT"
}
] | 1,309,737,600,000 | [
[
"Ben-Yair",
"A.",
""
],
[
"Felner",
"A.",
""
],
[
"Kraus",
"S.",
""
],
[
"Netanyahu",
"N.",
""
],
[
"Stern",
"R.",
""
]
] |
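The two-level structure described above can be sketched as follows, with several simplifications that are ours rather than the paper's (unit edge costs, a trivial BFS low level over explored edges, and a zero heuristic in the demo): the high level is ordinary A*, but the agent must physically walk to each node before expanding it, and the reported cost is distance travelled.

```python
# A simplified, illustrative take on the two-level idea: A* where each
# expansion requires the agent to walk to the node through explored territory.

import heapq
from collections import deque

def bfs_walk(known, src, dst):
    """Steps to walk from src to dst using only explored edges."""
    if src == dst:
        return 0
    frontier, dist = deque([src]), {src: 0}
    while frontier:
        u = frontier.popleft()
        for v in known.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                if v == dst:
                    return dist[v]
                frontier.append(v)
    raise ValueError("destination not reachable in explored graph")

def physical_astar(graph, start, goal, h):
    agent, travel = start, 0
    known = {start: graph[start]}           # exploring a node reveals its edges
    g = {start: 0}
    open_list = [(h(start), start)]
    closed = set()
    while open_list:
        _, node = heapq.heappop(open_list)
        if node in closed:
            continue
        travel += bfs_walk(known, agent, node)   # agent walks to the node...
        agent = node
        known[node] = graph[node]                # ...and explores it
        closed.add(node)
        if node == goal:
            return g[node], travel               # path cost, travel effort
        for nbr in graph[node]:
            if g[node] + 1 < g.get(nbr, float("inf")):
                g[nbr] = g[node] + 1
                heapq.heappush(open_list, (g[nbr] + h(nbr), nbr))
    return None

graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
print(physical_astar(graph, 0, 3, h=lambda n: 0))  # -> (2, 4)
```

Note how the two returned numbers separate the solution quality (shortest path length 2) from the agent's exploration effort (4 moves), which is the cost measure the abstract emphasizes.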
1107.0042 | N. L. Zhang | N. L. Zhang, W. Zhang | Restricted Value Iteration: Theory and Algorithms | null | Journal Of Artificial Intelligence Research, Volume 23, pages
123-165, 2005 | 10.1613/jair.1379 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Value iteration is a popular algorithm for finding near optimal policies for
POMDPs. It is inefficient due to the need to account for the entire belief
space, which necessitates the solution of large numbers of linear programs. In
this paper, we study value iteration restricted to belief subsets. We show
that, together with properly chosen belief subsets, restricted value iteration
yields near-optimal policies and we give a condition for determining whether a
given belief subset would bring about savings in space and time. We also apply
restricted value iteration to two interesting classes of POMDPs, namely
informative POMDPs and near-discernible POMDPs.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:38:52 GMT"
}
] | 1,309,737,600,000 | [
[
"Zhang",
"N. L.",
""
],
[
"Zhang",
"W.",
""
]
] |
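As a toy illustration of restricting value iteration to a belief subset, the sketch below runs value iteration for a two-state, tiger-style POMDP on a finite grid of beliefs, mapping successor beliefs to the nearest grid point. The grid scheme and model parameters are our own simplifications, not the informative or near-discernible subsets analysed in the paper.

```python
# Value iteration restricted to a grid of beliefs for a two-state POMDP:
# listen (cost -1, noisy observation) or guess the state (terminal reward).

GAMMA, N = 0.95, 41
GRID = [i / (N - 1) for i in range(N)]    # belief b = P(state s0)
ACC = 0.85                                # accuracy of the noisy observation

def listen_successors(b):
    """[(P(o | b), updated belief)] for the two observations."""
    out = []
    for like0, like1 in ((ACC, 1 - ACC), (1 - ACC, ACC)):
        po = like0 * b + like1 * (1 - b)
        out.append((po, like0 * b / po))
    return out

def nearest(b):
    return min(GRID, key=lambda g: abs(g - b))

V = {g: 0.0 for g in GRID}
for _ in range(200):                      # crude fixed iteration count
    V = {b: max(
            -1.0 + GAMMA * sum(po * V[nearest(b2)]
                               for po, b2 in listen_successors(b)),
            10 * b - 100 * (1 - b),       # guess s0: right +10, wrong -100
            10 * (1 - b) - 100 * b,       # guess s1
         ) for b in V}

print(V[0.0], V[0.5], V[1.0])  # certain beliefs are worth 10; b=0.5 less
```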
1107.0043 | D. Cohen | D. Cohen, M. Cooper, P. Jeavons, A. Krokhin | A Maximal Tractable Class of Soft Constraints | null | Journal Of Artificial Intelligence Research, Volume 22, pages
1-22, 2004 | 10.1613/jair.1400 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many researchers in artificial intelligence are beginning to explore the use
of soft constraints to express a set of (possibly conflicting) problem
requirements. A soft constraint is a function defined on a collection of
variables which associates some measure of desirability with each possible
combination of values for those variables. However, the crucial question of the
computational complexity of finding the optimal solution to a collection of
soft constraints has so far received very little attention. In this paper we
identify a class of soft binary constraints for which the problem of finding
the optimal solution is tractable. In other words, we show that for any given
set of such constraints, there exists a polynomial time algorithm to determine
the assignment having the best overall combined measure of desirability. This
tractable class includes many commonly-occurring soft constraints, such as 'as
near as possible' or 'as soon as possible after', as well as crisp constraints
such as 'greater than'. Finally, we show that this tractable class is maximal,
in the sense that adding any other form of soft binary constraint which is not
in the class gives rise to a class of problems which is NP-hard.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:39:17 GMT"
}
] | 1,309,737,600,000 | [
[
"Cohen",
"D.",
""
],
[
"Cooper",
"M.",
""
],
[
"Jeavons",
"P.",
""
],
[
"Krokhin",
"A.",
""
]
] |