id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1401.3875 | Wheeler Ruml | Wheeler Ruml, Minh Binh Do, Rong Zhou, Markus P.J. Fromherz | On-line Planning and Scheduling: An Application to Controlling Modular
Printers | null | Journal Of Artificial Intelligence Research, Volume 40, pages
415-468, 2011 | 10.1613/jair.3184 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a case study of artificial intelligence techniques applied to the
control of production printing equipment. Like many other real-world
applications, this complex domain requires high-speed autonomous
decision-making and robust continual operation. To our knowledge, this work
represents the first successful industrial application of embedded
domain-independent temporal planning. Our system handles execution failures and
multi-objective preferences. At its heart is an on-line algorithm that combines
techniques from state-space planning and partial-order scheduling. We suggest
that this general architecture may prove useful in other applications as more
intelligent systems operate in continual, on-line settings. Our system has been
used to drive several commercial prototypes and has enabled a new product
architecture for our industrial partner. When compared with state-of-the-art
off-line planners, our system is hundreds of times faster and often finds
better plans. Our experience demonstrates that domain-independent AI planning
based on heuristic search can flexibly handle time, resources, replanning, and
multiple objectives in a high-speed practical application without requiring
hand-coded control knowledge.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:10:17 GMT"
}
] | 1,389,916,800,000 | [
[
"Ruml",
"Wheeler",
""
],
[
"Do",
"Minh Binh",
""
],
[
"Zhou",
"Rong",
""
],
[
"Fromherz",
"Markus P. J.",
""
]
] |
1401.3881 | Mustafa Bilgic | Mustafa Bilgic, Lise Getoor | Value of Information Lattice: Exploiting Probabilistic Independence for
Effective Feature Subset Acquisition | null | Journal Of Artificial Intelligence Research, Volume 41, pages
69-95, 2011 | 10.1613/jair.3200 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the cost-sensitive feature acquisition problem, where
misclassifying an instance is costly but the expected misclassification cost
can be reduced by acquiring the values of the missing features. Because
acquiring the features is costly as well, the objective is to acquire the right
set of features so that the sum of the feature acquisition cost and
misclassification cost is minimized. We describe the Value of Information
Lattice (VOILA), an optimal and efficient feature subset acquisition framework.
Unlike the common practice, which is to acquire features greedily, VOILA can
reason with subsets of features. VOILA efficiently searches the space of
possible feature subsets by discovering and exploiting conditional independence
properties between the features and it reuses probabilistic inference
computations to further speed up the process. Through empirical evaluation on
five medical datasets, we show that the greedy strategy is often reluctant to
acquire features, as it cannot forecast the benefit of acquiring multiple
features in combination.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:12:42 GMT"
}
] | 1,389,916,800,000 | [
[
"Bilgic",
"Mustafa",
""
],
[
"Getoor",
"Lise",
""
]
] |
1401.3882 | Saket Joshi | Saket Joshi, Roni Khardon | Probabilistic Relational Planning with First Order Decision Diagrams | null | Journal Of Artificial Intelligence Research, Volume 41, pages
231-266, 2011 | 10.1613/jair.3205 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic programming algorithms have been successfully applied to
propositional stochastic planning problems by using compact representations, in
particular algebraic decision diagrams, to capture domain dynamics and value
functions. Work on symbolic dynamic programming lifted these ideas to first
order logic using several representation schemes. Recent work introduced a
first order variant of decision diagrams (FODD) and developed a value iteration
algorithm for this representation. This paper develops several improvements to
the FODD algorithm that make the approach practical. These include new
reduction operators that decrease the size of the representation, several
speedup techniques, and techniques for value approximation. Incorporating
these, the paper presents a planning system, FODD-Planner, for solving
relational stochastic planning problems. The system is evaluated on several
domains, including problems from the recent international planning competition,
and shows competitive performance with top ranking systems. This is the first
demonstration of feasibility of this approach and it shows that abstraction
through compact representation is a promising approach to stochastic planning.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:13:02 GMT"
}
] | 1,389,916,800,000 | [
[
"Joshi",
"Saket",
""
],
[
"Khardon",
"Roni",
""
]
] |
1401.3885 | Tomas De la Rosa | Tomas De la Rosa, Sergio Jimenez, Raquel Fuentetaja, Daniel Borrajo | Scaling up Heuristic Planning with Relational Decision Trees | null | Journal Of Artificial Intelligence Research, Volume 40, pages
767-813, 2011 | 10.1613/jair.3231 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current evaluation functions for heuristic planning are expensive to compute.
In numerous planning problems these functions provide good guidance to the
solution, so they are worth the expense. However, when evaluation functions are
misguiding or when planning problems are large enough, lots of node evaluations
must be computed, which severely limits the scalability of heuristic planners.
In this paper, we present a novel solution for reducing node evaluations in
heuristic planning based on machine learning. Particularly, we define the task
of learning search control for heuristic planning as a relational
classification task, and we use an off-the-shelf relational classification tool
to address this learning task. Our relational classification task captures the
preferred action to select in the different planning contexts of a specific
planning domain. These planning contexts are defined by the set of helpful
actions of the current state, the goals remaining to be achieved, and the
static predicates of the planning task. This paper shows two methods for
guiding the search of a heuristic planner with the learned classifiers. The
first one consists of using the resulting classifier as an action policy. The
second one consists of applying the classifier to generate lookahead states
within a Best First Search algorithm. Experiments over a variety of domains
reveal that our heuristic planner using the learned classifiers solves larger
problems than state-of-the-art planners.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:14:42 GMT"
}
] | 1,389,916,800,000 | [
[
"De la Rosa",
"Tomas",
""
],
[
"Jimenez",
"Sergio",
""
],
[
"Fuentetaja",
"Raquel",
""
],
[
"Borrajo",
"Daniel",
""
]
] |
1401.3886 | Wei Li | Wei Li, Pascal Poupart, Peter van Beek | Exploiting Structure in Weighted Model Counting Approaches to
Probabilistic Inference | null | Journal Of Artificial Intelligence Research, Volume 40, pages
729-765, 2011 | 10.1613/jair.3232 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous studies have demonstrated that encoding a Bayesian network into a
SAT formula and then performing weighted model counting using a backtracking
search algorithm can be an effective method for exact inference. In this paper,
we present techniques for improving this approach for Bayesian networks with
noisy-OR and noisy-MAX relations---two relations that are widely used in
practice as they can dramatically reduce the number of probabilities one needs
to specify. In particular, we present two SAT encodings for noisy-OR and two
encodings for noisy-MAX that exploit the structure or semantics of the
relations to improve both time and space efficiency, and we prove the
correctness of the encodings. We experimentally evaluated our techniques on
large-scale real and randomly generated Bayesian networks. On these benchmarks,
our techniques gave speedups of up to two orders of magnitude over the best
previous approaches for networks with noisy-OR/MAX relations and scaled up to
larger networks. As well, our techniques extend the weighted model counting
approach for exact inference to networks that were previously intractable for
the approach.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:15:08 GMT"
}
] | 1,389,916,800,000 | [
[
"Li",
"Wei",
""
],
[
"Poupart",
"Pascal",
""
],
[
"van Beek",
"Peter",
""
]
] |
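As a hedged aside on the technique this entry builds on: weighted model counting computes the probability of evidence as the total weight of the satisfying assignments of the network's CNF encoding. The brute-force sketch below shows the quantity being computed; real engines, like the backtracking search the abstract mentions, use clause learning and component caching instead of enumeration, and all names here are illustrative.

```python
# Minimal weighted model counter by enumeration; illustrative only.
from itertools import product

def weighted_model_count(clauses, weights, n_vars):
    """clauses: list of clauses, each a list of signed ints (DIMACS style).
    weights[v] = (weight if v is False, weight if v is True), v in 1..n_vars."""
    total = 0.0
    for assignment in product([False, True], repeat=n_vars):
        satisfied = all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
                        for clause in clauses)
        if satisfied:
            w = 1.0
            for v in range(1, n_vars + 1):
                w *= weights[v][1] if assignment[v - 1] else weights[v][0]
            total += w
    return total

# e.g. P(x1 or x2) with P(x1)=0.3, P(x2)=0.6:
# weighted_model_count([[1, 2]], {1: (0.7, 0.3), 2: (0.4, 0.6)}, 2) == 0.72
```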
1401.3890 | Joerg Hoffmann | Joerg Hoffmann | Analyzing Search Topology Without Running Any Search: On the Connection
Between Causal Graphs and h+ | null | Journal Of Artificial Intelligence Research, Volume 41, pages
155-229, 2011 | 10.1613/jair.3276 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ignoring delete lists relaxation is of paramount importance for both
satisficing and optimal planning. In earlier work, it was observed that the
optimal relaxation heuristic h+ has amazing qualities in many classical
planning benchmarks, in particular pertaining to the complete absence of local
minima. The proofs of this are hand-made, raising the question whether such
proofs can be led automatically by domain analysis techniques. In contrast to
earlier disappointing results -- the analysis method has exponential runtime
and succeeds only in two extremely simple benchmark domains -- we herein answer
this question in the affirmative. We establish connections between causal graph
structure and h+ topology. This results in low-order polynomial time analysis
methods, implemented in a tool we call TorchLight. Of the 12 domains where the
absence of local minima has been proved, TorchLight gives strong success
guarantees in 8 domains. Empirically, its analysis exhibits strong performance
in a further 2 of these domains, plus in 4 more domains where local minima may
exist but are rare. In this way, TorchLight can distinguish easy domains from
hard ones. By summarizing structural reasons for analysis failure, TorchLight
also provides diagnostic output indicating domain aspects that may cause local
minima.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:16:17 GMT"
}
] | 1,389,916,800,000 | [
[
"Hoffmann",
"Joerg",
""
]
] |
1401.3892 | Sajjad Ahmed Siddiqi | Sajjad Ahmed Siddiqi, Jinbo Huang | Sequential Diagnosis by Abstraction | null | Journal Of Artificial Intelligence Research, Volume 41, pages
329-365, 2011 | 10.1613/jair.3296 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When a system behaves abnormally, sequential diagnosis takes a sequence of
measurements of the system until the faults causing the abnormality are
identified, and the goal is to reduce the diagnostic cost, defined here as the
number of measurements. To propose measurement points, previous work employs a
heuristic based on reducing the entropy over a computed set of diagnoses. This
approach generally has good performance in terms of diagnostic cost, but can
fail to diagnose large systems when the set of diagnoses is too large. Focusing
on a smaller set of probable diagnoses scales the approach but generally leads
to increased average diagnostic costs. In this paper, we propose a new
diagnostic framework employing four new techniques, which scales to much larger
systems with good performance in terms of diagnostic cost. First, we propose a
new heuristic for measurement point selection that can be computed efficiently,
without requiring the set of diagnoses, once the system is modeled as a
Bayesian network and compiled into a logical form known as d-DNNF. Second, we
extend hierarchical diagnosis, a technique based on system abstraction from our
previous work, to handle probabilities so that it can be applied to sequential
diagnosis to allow larger systems to be diagnosed. Third, for the largest
systems where even hierarchical diagnosis fails, we propose a novel method that
converts the system into one that has a smaller abstraction and whose diagnoses
form a superset of those of the original system; the new system can then be
diagnosed and the result mapped back to the original system. Finally, we
propose a novel cost estimation function which can be used to choose an
abstraction of the system that is more likely to provide optimal average cost.
Experiments with ISCAS-85 benchmark circuits indicate that our approach scales
to all circuits in the suite except one that has a flat structure not
susceptible to useful abstraction.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:16:38 GMT"
}
] | 1,389,916,800,000 | [
[
"Siddiqi",
"Sajjad Ahmed",
""
],
[
"Huang",
"Jinbo",
""
]
] |
1401.3893 | Changhe Yuan | Changhe Yuan, Heejin Lim, Tsai-Ching Lu | Most Relevant Explanation in Bayesian Networks | null | Journal Of Artificial Intelligence Research, Volume 42, pages
309-352, 2011 | 10.1613/jair.3301 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major inference task in Bayesian networks is explaining why some variables
are observed in their particular states using a set of target variables.
Existing methods for solving this problem often generate explanations that are
either too simple (underspecified) or too complex (overspecified). In this
paper, we introduce a method called Most Relevant Explanation (MRE) which finds
a partial instantiation of the target variables that maximizes the generalized
Bayes factor (GBF) as the best explanation for the given evidence. Our study
shows that GBF has several theoretical properties that enable MRE to
automatically identify the most relevant target variables in forming its
explanation. In particular, conditional Bayes factor (CBF), defined as the GBF
of a new explanation conditioned on an existing explanation, provides a soft
measure on the degree of relevance of the variables in the new explanation in
explaining the evidence given the existing explanation. As a result, MRE is
able to automatically prune less relevant variables from its explanation. We
also show that CBF is able to capture well the explaining-away phenomenon that
is often represented in Bayesian networks. Moreover, we define two dominance
relations between the candidate solutions and use the relations to generalize
MRE to find a set of top explanations that is both diverse and representative.
Case studies on several benchmark diagnostic Bayesian networks show that MRE is
often able to find explanatory hypotheses that are not only precise but also
concise.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:17:05 GMT"
}
] | 1,389,916,800,000 | [
[
"Yuan",
"Changhe",
""
],
[
"Lim",
"Heejin",
""
],
[
"Lu",
"Tsai-Ching",
""
]
] |
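For orientation, the generalized Bayes factor (GBF) this entry scores explanations with is standardly defined as below; the notation is assumed from the Bayes-factor literature rather than quoted from this record. Here $x$ is a partial instantiation of the target variables, $\bar{x}$ the event that $x$ does not hold, and $e$ the evidence; MRE searches for the $x$ maximizing this ratio.

$$\mathrm{GBF}(x;\, e) \;=\; \frac{P(e \mid x)}{P(e \mid \bar{x})}$$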
1401.3895 | Wolfgang Dvorak | Wolfgang Dvorak, Stefan Woltran | On the Intertranslatability of Argumentation Semantics | null | Journal Of Artificial Intelligence Research, Volume 41, pages
445-475, 2011 | 10.1613/jair.3318 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Translations between different nonmonotonic formalisms always have been an
important topic in the field, in particular to understand the
knowledge-representation capabilities those formalisms offer. We provide such
an investigation in terms of different semantics proposed for abstract
argumentation frameworks, a nonmonotonic yet simple formalism which has received
increasing interest within the last decade. Although the properties of these
different semantics are nowadays well understood, there are no explicit results
about intertranslatability. We provide such translations with respect to
different properties and also give a few novel complexity results which
underlie some negative results.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:17:48 GMT"
}
] | 1,389,916,800,000 | [
[
"Dvorak",
"Wolfgang",
""
],
[
"Woltran",
"Stefan",
""
]
] |
1401.3899 | Ganesh Ram Santhanam | Ganesh Ram Santhanam, Samik Basu, Vasant Honavar | Representing and Reasoning with Qualitative Preferences for
Compositional Systems | null | Journal Of Artificial Intelligence Research, Volume 42, pages
211-274, 2011 | 10.1613/jair.3339 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many applications, e.g., Web service composition, complex system design, team
formation, etc., rely on methods for identifying collections of objects or
entities satisfying some functional requirement. Among the collections that
satisfy the functional requirement, it is often necessary to identify one or
more collections that are optimal with respect to user preferences over a set
of attributes that describe the non-functional properties of the collection.
We develop a formalism that lets users express the relative importance among
attributes and qualitative preferences over the valuations of each attribute.
We define a dominance relation that allows us to compare collections of objects
in terms of preferences over attributes of the objects that make up the
collection. We establish some key properties of the dominance relation. In
particular, we show that the dominance relation is a strict partial order when
the intra-attribute preference relations are strict partial orders and the
relative importance preference relation is an interval order.
We provide algorithms that use this dominance relation to identify the set of
most preferred collections. We show that under certain conditions, the
algorithms are guaranteed to return only (sound), all (complete), or at least
one (weakly complete) of the most preferred collections. We present results of
simulation experiments comparing the proposed algorithms with respect to (a)
the quality of solutions (number of most preferred solutions) produced by the
algorithms, and (b) their performance and efficiency. We also explore some
interesting conjectures suggested by the results of our experiments that relate
the properties of the user preferences, the dominance relation, and the
algorithms.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:19:43 GMT"
}
] | 1,389,916,800,000 | [
[
"Santhanam",
"Ganesh Ram",
""
],
[
"Basu",
"Samik",
""
],
[
"Honavar",
"Vasant",
""
]
] |
1401.3905 | Ko-Hsin Cindy Wang | Ko-Hsin Cindy Wang, Adi Botea | MAPP: a Scalable Multi-Agent Path Planning Algorithm with Tractability
and Completeness Guarantees | null | Journal Of Artificial Intelligence Research, Volume 42, pages
55-90, 2011 | 10.1613/jair.3370 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-agent path planning is a challenging problem with numerous real-life
applications. Running a centralized search such as A* in the combined state
space of all units is complete and cost-optimal, but scales poorly, as the
state space size is exponential in the number of mobile units. Traditional
decentralized approaches, such as FAR and WHCA*, are faster and more scalable,
being based on problem decomposition. However, such methods are incomplete and
provide no guarantees with respect to the running time or the solution quality.
They are not necessarily able to tell in a reasonable time whether they would
succeed in finding a solution to a given instance. We introduce MAPP, a
tractable algorithm for multi-agent path planning on undirected graphs. We
present a basic version and several extensions. They have low-polynomial
worst-case upper bounds for the running time, the memory requirements, and the
length of solutions. Even though all algorithmic versions are incomplete in the
general case, each provides formal guarantees on problems it can solve. For
each version, we discuss the algorithms completeness with respect to clearly
defined subclasses of instances. Experiments were run on realistic game grid
maps. MAPP solved 99.86% of all mobile units, which is 18--22% better than the
percentage of FAR and WHCA*. MAPP marked 98.82% of all units as provably
solvable during the first stage of plan computation. Parts of MAPP's computation
can be re-used across instances on the same map. Speed-wise, MAPP is
competitive or significantly faster than WHCA*, depending on whether MAPP
performs all computations from scratch. When data that MAPP can re-use are
preprocessed offline and readily available, MAPP is slower than the very fast
FAR algorithm by a factor of 2.18 on average. MAPP's solutions are on average
20% longer than FAR's solutions and 7--31% longer than WHCA*'s solutions.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:21:59 GMT"
}
] | 1,389,916,800,000 | [
[
"Wang",
"Ko-Hsin Cindy",
""
],
[
"Botea",
"Adi",
""
]
] |
1401.3910 | Peng Dai | Peng Dai, Mausam, Daniel Sabby Weld, Judy Goldsmith | Topological Value Iteration Algorithms | null | Journal Of Artificial Intelligence Research, Volume 42, pages
181-209, 2011 | 10.1613/jair.3390 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Value iteration is a powerful yet inefficient algorithm for Markov decision
processes (MDPs) because it puts the majority of its effort into backing up the
entire state space, which turns out to be unnecessary in many cases. In order
to overcome this problem, many approaches have been proposed. Among them, ILAO*
and variants of RTDP are state-of-the-art ones. These methods use reachability
analysis and heuristic search to avoid some unnecessary backups. However, none
of these approaches build the graphical structure of the state transitions in a
pre-processing step or use the structural information to systematically
decompose a problem, thereby generating an intelligent backup sequence of the
state space. In this paper, we present two optimal MDP algorithms. The first
algorithm, topological value iteration (TVI), detects the structure of MDPs and
backs up states based on topological sequences. It (1) divides an MDP into
strongly-connected components (SCCs), and (2) solves these components
sequentially. TVI outperforms VI and other state-of-the-art algorithms vastly
when an MDP has multiple, close-to-equal-sized SCCs. The second algorithm,
focused topological value iteration (FTVI), is an extension of TVI. FTVI
restricts its attention to connected components that are relevant for solving
the MDP. Specifically, it uses a small amount of heuristic search to eliminate
provably sub-optimal actions; this pruning allows FTVI to find smaller
connected components, thus running faster. We demonstrate that FTVI outperforms
TVI by an order of magnitude, averaged across several domains. Surprisingly,
FTVI also significantly outperforms popular heuristically-informed MDP
algorithms such as ILAO*, LRTDP, BRTDP and Bayesian-RTDP in many domains,
sometimes by as much as two orders of magnitude. Finally, we characterize the
type of domains where FTVI excels --- suggesting a way to an informed choice of
solver.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 05:24:38 GMT"
}
] | 1,389,916,800,000 | [
[
"Dai",
"Peng",
""
],
[
"Mausam",
"",
""
],
[
"Weld",
"Daniel Sabby",
""
],
[
"Goldsmith",
"Judy",
""
]
] |
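A hedged sketch of the decomposition the abstract describes for TVI: Tarjan's algorithm emits strongly connected components sinks-first, which is exactly the order in which each component's values can be solved once and then frozen. Everything below is illustrative (data-structure choices are assumptions), not the authors' implementation.

```python
# Illustrative topological value iteration: SCC decomposition (Tarjan),
# then standard Bellman backups restricted to one component at a time.
def tarjan_sccs(states, successors):
    index, low, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def visit(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in successors(v):
            if w not in index:
                visit(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:          # v is the root of a component
            component = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                component.append(w)
                if w == v:
                    break
            sccs.append(component)      # sink components are emitted first

    for v in states:
        if v not in index:
            visit(v)
    return sccs

def topological_value_iteration(states, actions, T, R, gamma=0.95, eps=1e-6):
    """T[s][a] -> list of (next_state, prob); R[s][a] -> expected reward."""
    V = {s: 0.0 for s in states}
    succ = lambda s: {s2 for a in actions(s) for s2, _ in T[s][a]}
    for component in tarjan_sccs(states, succ):
        delta = float('inf')
        while delta > eps:   # plain value iteration restricted to one SCC;
            delta = 0.0      # values outside it are already converged
            for s in component:
                if not actions(s):       # absorbing/goal state keeps V = 0
                    continue
                best = max(R[s][a] + gamma * sum(p * V[s2] for s2, p in T[s][a])
                           for a in actions(s))
                delta = max(delta, abs(best - V[s]))
                V[s] = best
    return V
```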
1401.4144 | Yves Moinard | Philippe Besnard (INRIA - IRISA, IRIT), Marie-Odile Cordier (INRIA -
IRISA, UR1), Yves Moinard (INRIA - IRISA) | Arguments using ontological and causal knowledge | null | JIAF 2013 (Septi\`emes Journ\'ees de l'Intelligence Artificielle
Fondamentale) (2013) 41-48 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate an approach to reasoning about causes through argumentation.
We consider a causal model for a physical system, and look for arguments about
facts. Some arguments are meant to provide explanations of facts whereas some
challenge these explanations and so on. At the root of argumentation here are
causal links ({A_1, ... ,A_n} causes B) and ontological links (o_1 is_a o_2).
We present a system that provides a candidate explanation ({A_1, ... ,A_n}
explains {B_1, ... ,B_m}) by resorting to an underlying causal link
substantiated with appropriate ontological links. Argumentation is then at work
from these various explaining links. A case study is developed: the severe
storm Xynthia, which devastated part of France in 2010 with an unaccountably
high number of casualties.
| [
{
"version": "v1",
"created": "Thu, 16 Jan 2014 19:49:42 GMT"
}
] | 1,389,916,800,000 | [
[
"Besnard",
"Philippe",
"",
"INRIA - IRISA, IRIT"
],
[
"Cordier",
"Marie-Odile",
"",
"INRIA -\n IRISA, UR1"
],
[
"Moinard",
"Yves",
"",
"INRIA - IRISA"
]
] |
1401.4539 | S.M. Ferdous | S.M. Ferdous, M. Sohel Rahman | Solving the Minimum Common String Partition Problem with the Help of
Ants | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider the problem of finding a minimum common partition
of two strings. The problem has its application in genome comparison. As it is
an NP-hard, discrete combinatorial optimization problem, we employ a
metaheuristic technique, namely the MAX-MIN ant system, to solve this problem. To
achieve better efficiency we first map the problem instance into a special kind
of graph. Subsequently, we employ a MAX-MIN ant system to achieve high quality
solutions for the problem. Experimental results show the superiority of our
algorithm in comparison with the state-of-the-art algorithm in the literature.
The improvement achieved is also justified by a standard statistical test.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 13:15:30 GMT"
},
{
"version": "v2",
"created": "Wed, 21 May 2014 06:35:41 GMT"
}
] | 1,434,931,200,000 | [
[
"Ferdous",
"S. M.",
""
],
[
"Rahman",
"M. Sohel",
""
]
] |
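As a hedged illustration of the metaheuristic named here: in a MAX-MIN ant system only the best solution deposits pheromone, and all trails are clamped to a [tau_min, tau_max] band to avoid premature convergence. The MCSP-specific graph construction from the paper is omitted, and all names are illustrative.

```python
# MAX-MIN ant system pheromone update (generic form; illustrative names).
def mmas_update(tau, best_edges, best_cost, rho=0.02, tau_min=0.01, tau_max=1.0):
    for edge in tau:                 # evaporation on every trail
        tau[edge] *= (1.0 - rho)
    for edge in best_edges:          # only the best ant reinforces its edges
        tau[edge] += 1.0 / best_cost
    for edge in tau:                 # clamp trails into [tau_min, tau_max]
        tau[edge] = min(tau_max, max(tau_min, tau[edge]))
    return tau
```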
1401.4592 | Jiri Baum | Jiri Baum, Ann E. Nicholson, Trevor I. Dix | Proximity-Based Non-uniform Abstractions for Approximate Planning | null | Journal Of Artificial Intelligence Research, Volume 43, pages
477-522, 2012 | 10.1613/jair.3414 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a deterministic world, a planning agent can be certain of the consequences
of its planned sequence of actions. Not so, however, in dynamic, stochastic
domains where Markov decision processes are commonly used. Unfortunately these
suffer from the curse of dimensionality: if the state space is a Cartesian
product of many small sets (dimensions), planning is exponential in the number
of those dimensions.
Our new technique exploits the intuitive strategy of selectively ignoring
various dimensions in different parts of the state space. The resulting
non-uniformity has strong implications, since the approximation is no longer
Markovian, requiring the use of a modified planner. We also use a spatial and
temporal proximity measure, which responds to continued planning as well as
movement of the agent through the state space, to dynamically adapt the
abstraction as planning progresses.
We present qualitative and quantitative results across a range of
experimental domains showing that an agent exploiting this novel approximation
method successfully finds solutions to the planning problem using much less
than the full state space. We assess and analyse the features of domains which
our method can exploit.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 21:03:58 GMT"
}
] | 1,390,262,400,000 | [
[
"Baum",
"Jiri",
""
],
[
"Nicholson",
"Ann E.",
""
],
[
"Dix",
"Trevor I.",
""
]
] |
1401.4595 | Na Fu | Na Fu, Hoong Chuin Lau, Pradeep R. Varakantham, Fei Xiao | Robust Local Search for Solving RCPSP/max with Durational Uncertainty | null | Journal Of Artificial Intelligence Research, Volume 43, pages
43-86, 2012 | 10.1613/jair.3424 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scheduling problems in manufacturing, logistics and project management have
frequently been modeled using the framework of Resource Constrained Project
Scheduling Problems with minimum and maximum time lags (RCPSP/max). Due to the
importance of these problems, providing scalable solution schedules for
RCPSP/max problems is a topic of extensive research. However, all existing
methods for solving RCPSP/max assume that durations of activities are known
with certainty, an assumption that does not hold in real world scheduling
problems where unexpected external events such as manpower availability,
weather changes, etc. lead to delays or advances in completion of activities.
Thus, in this paper, our focus is on providing a scalable method for solving
RCPSP/max problems with durational uncertainty. To that end, we introduce the
robust local search method consisting of three key ideas: (a) Introducing and
studying the properties of two decision rule approximations used to compute
start times of activities with respect to dynamic realizations of the
durational uncertainty; (b) Deriving the expression for robust makespan of an
execution strategy based on decision rule approximations; and (c) A robust
local search mechanism to efficiently compute activity execution strategies
that are robust against durational uncertainty. Furthermore, we also provide
enhancements to local search that exploit temporal dependencies between
activities. Our experimental results illustrate that robust local search is
able to provide robust execution strategies efficiently.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 21:04:55 GMT"
}
] | 1,390,262,400,000 | [
[
"Fu",
"Na",
""
],
[
"Lau",
"Hoong Chuin",
""
],
[
"Varakantham",
"Pradeep R.",
""
],
[
"Xiao",
"Fei",
""
]
] |
1401.4597 | Matthew L. Ginsberg | Matthew L. Ginsberg | Dr.Fill: Crosswords and an Implemented Solver for Singly Weighted CSPs | null | Journal Of Artificial Intelligence Research, Volume 42, pages
851-886, 2011 | 10.1613/jair.3437 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe Dr.Fill, a program that solves American-style crossword puzzles.
From a technical perspective, Dr.Fill works by converting crosswords to
weighted CSPs, and then using a variety of novel techniques to find a solution.
These techniques include generally applicable heuristics for variable and value
selection, a variant of limited discrepancy search, and postprocessing and
partitioning ideas. Branch and bound is not used, as it was incompatible with
postprocessing and was determined experimentally to be of little practical
value. Dr.Fill's performance on crosswords from the American Crossword Puzzle
Tournament suggests that it ranks among the top fifty or so crossword solvers
in the world.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 21:05:30 GMT"
}
] | 1,390,262,400,000 | [
[
"Ginsberg",
"Matthew L.",
""
]
] |
1401.4598 | Ruoyun Huang | Ruoyun Huang, Yixin Chen, Weixiong Zhang | SAS+ Planning as Satisfiability | null | Journal Of Artificial Intelligence Research, Volume 43, pages
293-328, 2012 | 10.1613/jair.3442 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning as satisfiability is a principal approach to planning with many
eminent advantages. The existing planning as satisfiability techniques usually
use encodings compiled from STRIPS. We introduce a novel SAT encoding scheme
(SASE) based on the SAS+ formalism. The new scheme exploits the structural
information in SAS+, resulting in an encoding that is both more compact and
efficient for planning. We prove the correctness of the new encoding by
establishing an isomorphism between the solution plans of SASE and those of
STRIPS-based encodings. We further analyze the transition variables newly
introduced in SASE to explain why it accommodates modern SAT solving algorithms
and improves performance. We give empirical statistical results to support our
analysis. We also develop a number of techniques to further reduce the encoding
size of SASE, and conduct experimental studies to show the strength of each
individual technique. Finally, we report extensive experimental results to
demonstrate significant improvements of SASE over the state-of-the-art STRIPS
based encoding schemes in terms of both time and memory efficiency.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 21:05:52 GMT"
}
] | 1,390,262,400,000 | [
[
"Huang",
"Ruoyun",
""
],
[
"Chen",
"Yixin",
""
],
[
"Zhang",
"Weixiong",
""
]
] |
1401.4600 | Yifeng Zeng | Yifeng Zeng, Prashant Doshi | Exploiting Model Equivalences for Solving Interactive Dynamic Influence
Diagrams | null | Journal Of Artificial Intelligence Research, Volume 43, pages
211-255, 2012 | 10.1613/jair.3461 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on the problem of sequential decision making in partially observable
environments shared with other agents of uncertain types having similar or
conflicting objectives. This problem has been previously formalized by multiple
frameworks one of which is the interactive dynamic influence diagram (I-DID),
which generalizes the well-known influence diagram to the multiagent setting.
I-DIDs are graphical models and may be used to compute the policy of an agent
given its belief over the physical state and others' models, which changes as
the agent acts and observes in the multiagent setting.
As we may expect, solving I-DIDs is computationally hard. This is
predominantly due to the large space of candidate models ascribed to the other
agents and its exponential growth over time. We present two methods for
reducing the size of the model space and stemming its exponential growth. Both
these methods involve aggregating individual models into equivalence classes.
Our first method groups together behaviorally equivalent models and selects
only those models for updating which will result in predictive behaviors that
are distinct from others in the updated model space. The second method further
compacts the model space by focusing on portions of the behavioral predictions.
Specifically, we cluster actionally equivalent models that prescribe identical
actions at a single time step. Exactly identifying the equivalences would
require us to solve all models in the initial set. We avoid this by selectively
solving some of the models, thereby introducing an approximation. We discuss
the error introduced by the approximation, and empirically demonstrate the
improved efficiency in solving I-DIDs due to the equivalences.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 21:09:03 GMT"
}
] | 1,390,262,400,000 | [
[
"Zeng",
"Yifeng",
""
],
[
"Doshi",
"Prashant",
""
]
] |
1401.4601 | Gilles Pesant | Gilles Pesant, Claude-Guy Quimper, Alessandro Zanarini | Counting-Based Search: Branching Heuristics for Constraint Satisfaction
Problems | null | Journal Of Artificial Intelligence Research, Volume 43, pages
173-210, 2012 | 10.1613/jair.3463 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing a search heuristic for constraint programming that is reliable
across problem domains has been an important research topic in recent years.
This paper concentrates on one family of candidates: counting-based search.
Such heuristics seek to make branching decisions that preserve most of the
solutions by determining what proportion of solutions to each individual
constraint agree with that decision. Whereas most generic search heuristics in
constraint programming rely on local information at the level of the individual
variable, our search heuristics are based on more global information at the
constraint level. We design several algorithms that are used to count the
number of solutions to specific families of constraints and propose some search
heuristics exploiting such information. The experimental part of the paper
considers eight problem domains ranging from well-established benchmark puzzles
to rostering and sport scheduling. An initial empirical analysis identifies
heuristic maxSD as a robust candidate among our proposals. We then evaluate the
latter against the state of the art, including the latest generic search
heuristics, restarts, and discrepancy-based tree traversals. Experimental
results show that counting-based search generally outperforms other generic
heuristics.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 21:09:25 GMT"
}
] | 1,390,262,400,000 | [
[
"Pesant",
"Gilles",
""
],
[
"Quimper",
"Claude-Guy",
""
],
[
"Zanarini",
"Alessandro",
""
]
] |
1401.4606 | Patrick Raymond Conrad | Patrick Raymond Conrad, Brian Williams | Drake: An Efficient Executive for Temporal Plans with Choice | null | Journal Of Artificial Intelligence Research, Volume 42, pages
607-659, 2011 | 10.1613/jair.3478 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents Drake, a dynamic executive for temporal plans with choice.
Dynamic plan execution strategies allow an autonomous agent to react quickly to
unfolding events, improving the robustness of the agent. Prior work developed
methods for dynamically dispatching Simple Temporal Networks, and further
research enriched the expressiveness of the plans executives could handle,
including discrete choices, which are the focus of this work. However, in some
approaches to date, these additional choices induce significant storage or
latency requirements to make flexible execution possible.
Drake is designed to leverage the low latency made possible by a
preprocessing step called compilation, while avoiding high memory costs through
a compact representation. We leverage the concepts of labels and environments,
taken from prior work in Assumption-based Truth Maintenance Systems (ATMS), to
concisely record the implications of the discrete choices, exploiting the
structure of the plan to avoid redundant reasoning or storage. Our labeling and
maintenance scheme, called the Labeled Value Set Maintenance System, is
distinguished by its focus on properties fundamental to temporal problems, and,
more generally, weighted graph algorithms. In particular, the maintenance
system focuses on maintaining a minimal representation of non-dominated
constraints. We benchmark Drake's performance on random structured problems, and
find that Drake reduces the size of the compiled representation by a factor of
over 500 for large problems, while incurring only a modest increase in run-time
latency, compared to prior work in compiled executives for temporal plans with
discrete choices.
| [
{
"version": "v1",
"created": "Sat, 18 Jan 2014 21:10:40 GMT"
}
] | 1,390,262,400,000 | [
[
"Conrad",
"Patrick Raymond",
""
],
[
"Williams",
"Brian",
""
]
] |
1401.4942 | Gordana Dodig Crnkovic | Gordana Dodig-Crnkovic | Info-computational constructivism in modelling of life as cognition | 5 pages, SMLC conference University of Bergamo 12-14.09.2013,
http://www.pt-ai.org/smlc/2013/schedule | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper addresses the open question formulated as "Which levels of
abstraction are appropriate in the synthetic modelling of life and cognition?"
within the framework of info-computational constructivism, treating natural
phenomena as computational processes on informational structures. At present we
lack the common understanding of the processes of life and cognition in living
organisms with the details of co-construction of informational structures and
computational processes in embodied, embedded cognizing agents, both living and
artifactual ones. Starting with the definition of an agent as an entity capable
of acting on its own behalf, as an actor in Hewitt's Actor model of computation,
even systems as simple as molecules can be modelled as actors exchanging
messages (information). We adopt Kauffman's view of a living agent as something
that can reproduce and undergoes at least one thermodynamic work cycle. This
definition of living agents leads to Maturana and Varela's identification of
life with cognition. Within the info-computational constructive approach to
living beings as cognizing agents, from the simplest to the most complex living
systems, mechanisms of cognition can be studied in order to construct synthetic
model classes of artifactual cognizing agents on different levels of
organization.
| [
{
"version": "v1",
"created": "Sat, 2 Nov 2013 21:43:45 GMT"
}
] | 1,390,262,400,000 | [
[
"Dodig-Crnkovic",
"Gordana",
""
]
] |
1401.5156 | Juliana Wahid | Juliana Wahid, Naimah Mohd Hussin | Harmony Search Algorithm for Curriculum-Based Course Timetabling Problem | null | International Journal of Soft Computing and Software Engineering
[JSCSE], Vol. 3, No. 3, pp. 365-371, 2013 | 10.7321/jscse.v3.n3.55 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, harmony search algorithm is applied to curriculum-based course
timetabling. The implementation, specifically the process of improvisation
consists of memory consideration, random consideration and pitch adjustment. In
memory consideration, the value of the course number for the new solution was
selected from the course numbers located in the same column of the Harmony
Memory. This research used the highest occurrence of the course number to be
scheduled in a new harmony. The remaining courses that have not been scheduled
by memory consideration go through random consideration, i.e., they are assigned
any feasible location available in the new harmony solution. Each course
scheduled by memory consideration is then examined as to whether it should be
pitch adjusted, with a given probability, using one of eight procedures.
However, the algorithm produced results that were not better than the
previously known best solutions. With proper modification of the approach, the
algorithm could perform better on curriculum-based course timetabling.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2014 03:01:50 GMT"
}
] | 1,390,348,800,000 | [
[
"Wahid",
"Juliana",
""
],
[
"Hussin",
"Naimah Mohd",
""
]
] |
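As a hedged sketch of the improvisation loop this abstract walks through (memory consideration, random consideration, pitch adjustment), the generic step looks as below. Note the paper selects the most frequent course number from memory rather than sampling uniformly, and restricts choices to feasible timetable slots; both details are simplified away here and all names are illustrative.

```python
import random

# Generic harmony-search improvisation; illustrative, not the paper's code.
def improvise(harmony_memory, candidates, hmcr=0.9, par=0.3):
    """harmony_memory: list of solutions (equal-length lists);
    candidates[i]: feasible values for position i."""
    new_harmony = []
    for i in range(len(harmony_memory[0])):
        if random.random() < hmcr:
            # memory consideration: reuse a value from column i of the memory
            value = random.choice([h[i] for h in harmony_memory])
            if random.random() < par:
                # pitch adjustment: perturb to another feasible value
                value = random.choice(candidates[i])
        else:
            # random consideration: any feasible value for this position
            value = random.choice(candidates[i])
        new_harmony.append(value)
    return new_harmony
```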
1401.5157 | Toshiyuki Maeda | Toshiyuki Maeda, Masanori Fujii, Isao Hayashi | Skill Analysis with Time Series Image Data | 5 pages, 6 figures | International Journal of Soft Computing and Software Engineering
[JSCSE], Vol. 3, No. 3, pp. 576-580, 2013 | 10.7321/jscse.v3.n3.87 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a skill analysis with time series image data using data mining
methods, focused on table tennis. We do not use a body model, but use only
high-speed movies, from which time series data are obtained and analyzed using
data mining methods such as C4.5. We identify internal models of technical
skill by evaluating skillfulness in the forehand stroke of table tennis, and
discuss mono-functional and meta-functional skills for skill improvement.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2014 03:03:56 GMT"
}
] | 1,390,348,800,000 | [
[
"Maeda",
"Toshiyuki",
""
],
[
"Fujii",
"Masanori",
""
],
[
"Hayashi",
"Isao",
""
]
] |
1401.5424 | Roy Hayes Jr | Roy Hayes, Peter Beling, William Scherer | Real Time Strategy Language | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real Time Strategy (RTS) games provide complex domain to test the latest
artificial intelligence (AI) research. In much of the literature, AI systems
have been limited to playing one game. Although, this specialization has
resulted in stronger AI gaming systems it does not address the key concerns of
AI researcher. AI researchers seek the development of AI agents that can
autonomously interpret learn, and apply new knowledge. To achieve human level
performance, current AI systems rely on game specific knowledge of an expert.
The paper presents the full RTS language in hopes of shifting the current
research focus to the development of general RTS agents. General RTS agents are
AI gaming systems that can play any RTS games, defined in the RTS language.
This prevents game specific knowledge from being hard coded into the system,
thereby facilitating research that addresses the fundamental concerns of
artificial intelligence.
| [
{
"version": "v1",
"created": "Tue, 21 Jan 2014 19:14:22 GMT"
}
] | 1,390,348,800,000 | [
[
"Hayes",
"Roy",
""
],
[
"Beling",
"Peter",
""
],
[
"Scherer",
"William",
""
]
] |
1401.5813 | Adrian Lancucki | Adrian {\L}a\'ncucki | GGP with Advanced Reasoning and Board Knowledge Discovery | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quality of General Game Playing (GGP) matches suffers from slow
state-switching and weak knowledge modules. Instantiation and Propositional
Networks offer great performance gains over Prolog-based reasoning, but do not
scale well. In this publication, mGDL, a variant of GDL stripped of function
constants, has been defined as a basis for simple reasoning machines. mGDL
allows rules to be mapped easily to C++ functions. 253 out of 270 tested GDL rule
sheets conformed to mGDL without any modifications; the rest required minor
changes. A revised (m)GDL to C++ translation scheme has been reevaluated; it
brought gains ranging from 28% to 7300% over YAP Prolog, managing to compile
even demanding rule sheets under few seconds. For strengthening game knowledge,
spatial features inspired by similar successful techniques from computer Go
have been proposed. Because they require a Euclidean metric, a small board
extension to GDL has been defined through a set of ground atomic sentences. An
SGA-based genetic algorithm has been designed for tweaking game parameters and
conducting self-plays, so the features could be mined from meaningful game
records. The approach has been tested on a small cluster, giving performance
gains up to 20% more wins against the baseline UCT player. Implementations of
the proposed ideas constitute the core of GGP Spatium - a small C++/Python GGP
framework, created for developing compact GGP Players and problem solvers.
| [
{
"version": "v1",
"created": "Wed, 22 Jan 2014 21:52:49 GMT"
}
] | 1,390,521,600,000 | [
[
"Łańcucki",
"Adrian",
""
]
] |
1401.5848 | Christer B\"ackstr\"om | Christer B\"ackstr\"om, Peter Jonsson | Algorithms and Limits for Compact Plan Representations | null | Journal Of Artificial Intelligence Research, Volume 44, pages
141-177, 2012 | 10.1613/jair.3534 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compact representations of objects is a common concept in computer science.
Automated planning can be viewed as a case of this concept: a planning instance
is a compact implicit representation of a graph and the problem is to find a
path (a plan) in this graph. While the graphs themselves are represented
compactly as planning instances, the paths are usually represented explicitly
as sequences of actions. Some cases are known where the plans always have
compact representations, for example, using macros. We show that these results
do not extend to the general case, by proving a number of bounds for compact
representations of plans under various criteria, like efficient sequential or
random access of actions. In addition to this, we show that our results have
consequences for what can be gained from reformulating planning into some other
problem. As a contrast to this we also prove a number of positive results,
demonstrating restricted cases where plans do have useful compact
representations, as well as proving that macro plans have favourable access
properties. Our results are finally discussed in relation to other relevant
contexts.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 02:41:51 GMT"
}
] | 1,390,521,600,000 | [
[
"Bäckström",
"Christer",
""
],
[
"Jonsson",
"Peter",
""
]
] |
1401.5854 | Carlos Hern\'andez | Carlos Hern\'andez, Jorge A Baier | Avoiding and Escaping Depressions in Real-Time Heuristic Search | null | Journal Of Artificial Intelligence Research, Volume 43, pages
523-570, 2012 | 10.1613/jair.3590 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Heuristics used for solving hard real-time search problems have regions with
depressions. Such regions are bounded areas of the search space in which the
heuristic function is inaccurate compared to the actual cost to reach a
solution. Early real-time search algorithms, like LRTA*, easily become trapped
in those regions since the heuristic values of their states may need to be
updated multiple times, which results in costly solutions. State-of-the-art
real-time search algorithms, like LSS-LRTA* or LRTA*(k), improve LRTA*'s
mechanism to update the heuristic, resulting in improved performance. Those
algorithms, however, do not guide search towards avoiding depressed regions.
This paper presents depression avoidance, a simple real-time search principle
to guide search towards avoiding states that have been marked as part of a
heuristic depression. We propose two ways in which depression avoidance can be
implemented: mark-and-avoid and move-to-border. We implement these strategies
on top of LSS-LRTA* and RTAA*, producing 4 new real-time heuristic search
algorithms: aLSS-LRTA*, daLSS-LRTA*, aRTAA*, and daRTAA*. When the objective is
to find a single solution by running the real-time search algorithm once, we
show that daLSS-LRTA* and daRTAA* outperform their predecessors sometimes by
one order of magnitude. Of the four new algorithms, daRTAA* produces the best
solutions given a fixed deadline on the average time allowed per planning
episode. We prove all our algorithms have good theoretical properties: in
finite search spaces, they find a solution if one exists, and converge to an
optimal solution after a number of trials.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 02:45:02 GMT"
}
] | 1,390,521,600,000 | [
[
"Hernández",
"Carlos",
""
],
[
"Baier",
"Jorge A",
""
]
] |
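For context on the base algorithm these variants extend, here is a hedged sketch of one LRTA* trial: the agent repeatedly raises h(s) to the best one-step lookahead value and moves greedily, and the repeated raising of values inside a depression is precisely the costly behaviour that depression avoidance targets. Names are illustrative, not the authors' code.

```python
# One LRTA* trial (illustrative). h is a mutable dict of heuristic estimates.
def lrta_star_trial(start, goal, successors, cost, h):
    state, path = start, [start]
    while state != goal:
        best_next, best_f = None, float('inf')
        for s2 in successors(state):           # one-step lookahead
            f = cost(state, s2) + h.get(s2, 0.0)
            if f < best_f:
                best_next, best_f = s2, f
        h[state] = max(h.get(state, 0.0), best_f)  # learning: raise h(state)
        state = best_next                          # greedy move
        path.append(state)
    return path
```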
1401.5856 | Patrik Haslum | Patrik Haslum | Narrative Planning: Compilations to Classical Planning | null | Journal Of Artificial Intelligence Research, Volume 44, pages
383-395, 2012 | 10.1613/jair.3602 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A model of story generation recently proposed by Riedl and Young casts it as
planning, with the additional condition that story characters behave
intentionally. This means that characters have perceivable motivation for the
actions they take. I show that this condition can be compiled away (in more
ways than one) to produce a classical planning problem that can be solved by an
off-the-shelf classical planner, more efficiently than by Riedl and Young's
specialised planner.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 02:46:00 GMT"
}
] | 1,390,521,600,000 | [
[
"Haslum",
"Patrik",
""
]
] |
1401.5857 | Amanda J. Coles | Amanda J. Coles, Andrew I. Coles, Maria Fox, Derek Long | COLIN: Planning with Continuous Linear Numeric Change | null | Journal Of Artificial Intelligence Research, Volume 44, pages
1-96, 2012 | 10.1613/jair.3608 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we describe COLIN, a forward-chaining heuristic search planner,
capable of reasoning with COntinuous LINear numeric change, in addition to the
full temporal semantics of PDDL. Through this work we make two advances to the
state-of-the-art in terms of expressive reasoning capabilities of planners: the
handling of continuous linear change, and the handling of duration-dependent
effects in combination with duration inequalities, both of which require
tightly coupled temporal and numeric reasoning during planning. COLIN combines
FF-style forward chaining search, with the use of a Linear Program (LP) to
check the consistency of the interacting temporal and numeric constraints at
each state. The LP is used to compute bounds on the values of variables in each
state, reducing the range of actions that need to be considered for
application. In addition, we develop an extension of the Temporal Relaxed
Planning Graph heuristic of CRIKEY3, to support reasoning directly with
continuous change. We extend the range of task variables considered to be
suitable candidates for specifying the gradient of the continuous numeric
change effected by an action. Finally, we explore the potential for employing
mixed integer programming as a tool for optimising the timestamps of the
actions in the plan, once a solution has been found. To support this, we
further contribute a selection of extended benchmark domains that include
continuous numeric effects. We present results for COLIN that demonstrate its
scalability on a range of benchmarks, and compare to existing state-of-the-art
planners.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 02:46:26 GMT"
}
] | 1,390,521,600,000 | [
[
"Coles",
"Amanda J.",
""
],
[
"Coles",
"Andrew I.",
""
],
[
"Fox",
"Maria",
""
],
[
"Long",
"Derek",
""
]
] |
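As a hedged miniature of the per-state consistency check the abstract describes: deciding whether a set of simple temporal constraints t_j - t_i <= c admits a schedule can be posed as an LP feasibility problem. COLIN's LP additionally couples numeric variables and continuous change, which is omitted here; the function below is illustrative and uses only standard scipy calls.

```python
import numpy as np
from scipy.optimize import linprog

def stn_consistent(n_events, constraints):
    """constraints: list of (i, j, c) meaning t_j - t_i <= c.
    Feasibility of this LP = consistency of the temporal network."""
    A, b = [], []
    for i, j, c in constraints:
        row = np.zeros(n_events)
        row[j] += 1.0    # coefficient of t_j
        row[i] -= 1.0    # coefficient of t_i
        A.append(row)
        b.append(c)
    res = linprog(c=np.zeros(n_events), A_ub=np.array(A), b_ub=np.array(b),
                  bounds=[(None, None)] * n_events)
    return res.status != 2  # scipy status 2 signals an infeasible LP
```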
1401.5859 | Maria Fox | Maria Fox, Derek Long, Daniele Magazzeni | Plan-based Policies for Efficient Multiple Battery Load Management | null | Journal Of Artificial Intelligence Research, Volume 44, pages
335-382, 2012 | 10.1613/jair.3643 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient use of multiple batteries is a practical problem with wide and
growing application. The problem can be cast as a planning problem under
uncertainty. We describe the approach we have adopted to modelling and solving
this problem, seen as a Markov Decision Problem, building effective policies
for battery switching in the face of stochastic load profiles.
Our solution exploits and adapts several existing techniques: planning for
deterministic mixed discrete-continuous problems and Monte Carlo sampling for
policy learning. The paper describes the development of planning techniques to
allow solution of the non-linear continuous dynamic models capturing the
battery behaviours. This approach depends on carefully handled discretisation
of the temporal dimension. The construction of policies is performed using a
classification approach and this idea offers opportunities for wider
exploitation in other problems. The approach and its generality are described
in the paper.
Application of the approach leads to construction of policies that, in
simulation, significantly outperform those that are currently in use and the
best published solutions to the battery management problem. We obtain
solutions that achieve more than 99% efficiency in simulation compared with the
theoretical limit and do so with far fewer battery switches than existing
policies. Behaviour of physical batteries does not exactly match the simulated
models for many reasons, so to confirm that our theoretical results can lead to
real measured improvements in performance we also conduct and report
experiments using a physical test system. These results demonstrate that we can
obtain 5%-15% improvement in lifetimes in the case of a two battery system.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 02:47:47 GMT"
}
] | 1,390,521,600,000 | [
[
"Fox",
"Maria",
""
],
[
"Long",
"Derek",
""
],
[
"Magazzeni",
"Daniele",
""
]
] |
1401.5860 | Ignasi Ab\'io | Ignasi Ab\'io, Robert Nieuwenhuis, Albert Oliveras, Enric
Rodriguez-Carbonell, Valentin Mayer-Eichberger | A New Look at BDDs for Pseudo-Boolean Constraints | null | Journal Of Artificial Intelligence Research, Volume 45, pages
443-480, 2012 | 10.1613/jair.3653 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pseudo-Boolean constraints are omnipresent in practical applications, and
thus a significant effort has been devoted to the development of good SAT
encoding techniques for them. Some of these encodings first construct a Binary
Decision Diagram (BDD) for the constraint, and then encode the BDD into a
propositional formula. These BDD-based approaches have some important
advantages, such as not being dependent on the size of the coefficients, or
being able to share the same BDD for representing many constraints.
We first focus on the size of the resulting BDDs, which was considered to be
an open problem in our research community. We report on previous work where it
was proved that there are Pseudo-Boolean constraints for which no polynomial
BDD exists. We also give an alternative and simpler proof assuming that NP is
different from Co-NP. More interestingly, here we also show how to overcome the
possible exponential blowup of BDDs by coefficient decomposition. This allows
us to give the first polynomial generalized arc-consistent ROBDD-based encoding
for Pseudo-Boolean constraints.
Finally, we focus on practical issues: we show how to efficiently construct
such ROBDDs, how to encode them into SAT with only 2 clauses per node, and
present experimental results that confirm that our approach is competitive with
other encodings and state-of-the-art Pseudo-Boolean solvers.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 02:48:33 GMT"
}
] | 1,390,521,600,000 | [
[
"Abío",
"Ignasi",
""
],
[
"Nieuwenhuis",
"Robert",
""
],
[
"Oliveras",
"Albert",
""
],
[
"Rodriguez-Carbonell",
"Enric",
""
],
[
"Mayer-Eichberger",
"Valentin",
""
]
] |
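As a companion to the abstract above, the sketch below builds a reduced decision diagram for a toy Pseudo-Boolean constraint a1*x1 + ... + an*xn <= K by memoising subproblems on (next variable, remaining slack). It is a baseline construction, not the paper's interval-merging ROBDD algorithm or its coefficient decomposition; coefficients and bound are illustrative.

```python
from functools import lru_cache

coeffs = (3, 5, 7, 7)   # toy coefficients a_i
K = 10                  # toy bound

@lru_cache(maxsize=None)
def node(i, slack):
    """BDD-like node deciding variables x_i..x_n with 'slack' budget left;
    identical subproblems are shared through the memoisation."""
    if slack < 0:
        return False                        # constraint already violated
    if sum(coeffs[i:]) <= slack:
        return True                         # satisfied whatever follows
    hi = node(i + 1, slack - coeffs[i])     # branch x_i = 1
    lo = node(i + 1, slack)                 # branch x_i = 0
    return (i, hi, lo) if hi != lo else hi  # standard BDD reduction rule

print(node(0, K))   # nested-tuple rendering of the diagram
```

Memoising on the exact slack can still create exponentially many nodes in the worst case; merging nodes whose slacks lie in the same feasibility interval, as the encoding literature does, is what keeps practical diagrams small.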
1401.5861 | Carmel Domshlak | Carmel Domshlak, Erez Karpas, Shaul Markovitch | Online Speedup Learning for Optimal Planning | null | Journal Of Artificial Intelligence Research, Volume 44, pages
709-755, 2012 | 10.1613/jair.3676 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Domain-independent planning is one of the foundational areas in the field of
Artificial Intelligence. A description of a planning task consists of an
initial world state, a goal, and a set of actions for modifying the world
state. The objective is to find a sequence of actions, that is, a plan, that
transforms the initial world state into a goal state. In optimal planning, we
are interested in finding not just a plan, but one of the cheapest plans. A
prominent approach to optimal planning these days is heuristic state-space
search, guided by admissible heuristic functions. Numerous admissible
heuristics have been developed, each with its own strengths and weaknesses, and
it is well known that there is no single "best" heuristic for optimal planning
in general. Thus, which heuristic to choose for a given planning task is a
difficult question. This difficulty can be avoided by combining several
heuristics, but that requires computing numerous heuristic estimates at each
state, and the time spent doing so may outweigh the time saved by the combined
advantages of the different heuristics. We present a
novel method that reduces the cost of combining admissible heuristics for
optimal planning, while maintaining its benefits. Using an idealized search
space model, we formulate a decision rule for choosing the best heuristic to
compute at each state. We then present an active online learning approach for
learning a classifier with that decision rule as the target concept, and employ
the learned classifier to decide which heuristic to compute at each state. We
evaluate this technique empirically, and show that it substantially outperforms
the standard method for combining several heuristics via their pointwise
maximum.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 02:49:53 GMT"
}
] | 1,390,521,600,000 | [
[
"Domshlak",
"Carmel",
""
],
[
"Karpas",
"Erez",
""
],
[
"Markovitch",
"Shaul",
""
]
] |
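The contrast discussed in the abstract above, evaluating every heuristic and taking the pointwise maximum versus computing only a classifier-selected heuristic per state, can be sketched in a few lines. Both heuristics and the hand-written decision rule below are toy stand-ins, not the paper's learned classifier.

```python
def h1(state):            # e.g. a cheap but weak admissible heuristic (toy)
    return abs(state)

def h2(state):            # e.g. a stronger but expensive heuristic (toy)
    return abs(state) + (state % 2)

def h_max(state):
    """Standard combination: evaluate everything, take the pointwise maximum.
    Still admissible, but pays the cost of every heuristic at every state."""
    return max(h1(state), h2(state))

def h_selective(state, classifier):
    """Evaluate only the heuristic the classifier picks for this state."""
    return h2(state) if classifier(state) else h1(state)

print(h_max(7), h_selective(7, classifier=lambda s: s > 5))
```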
1401.5869 | Zizhen Zhang | Zizhen Zhang, Hu Qin, Xiaocong Liang, Andrew Lim | An Enhanced Branch-and-bound Algorithm for the Talent Scheduling Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The talent scheduling problem is a simplified version of the real-world film
shooting problem, which aims to determine a shooting sequence so as to minimize
the total cost of the actors involved. In this article, we first formulate the
problem as an integer linear programming model. Next, we devise a
branch-and-bound algorithm to solve the problem. The branch-and-bound algorithm
is enhanced by several accelerating techniques, including preprocessing,
dominance rules and caching search states. Extensive experiments over two sets
of benchmark instances suggest that our algorithm is superior to the current
best exact algorithm. Finally, the impacts of different parameter settings are
disclosed by some additional experiments.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 04:09:45 GMT"
}
] | 1,390,521,600,000 | [
[
"Zhang",
"Zizhen",
""
],
[
"Qin",
"Hu",
""
],
[
"Liang",
"Xiaocong",
""
],
[
"Lim",
"Andrew",
""
]
] |
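For reference alongside the abstract above, a minimal exact baseline for talent scheduling is the classical dynamic program over subsets of scenes sketched below; the paper's branch-and-bound improves on this kind of exhaustive search. The instance, the unit scene durations, and the day rates are illustrative assumptions.

```python
from functools import lru_cache

scenes = [frozenset("ab"), frozenset("bc"), frozenset("a"), frozenset("c")]
day_rate = {"a": 3, "b": 1, "c": 2}   # cost per actor per shooting day

def on_set(done, j):
    """Actors paid while shooting scene j after the scenes in 'done':
    the cast of j, plus actors already started but not yet finished."""
    started = set().union(*(scenes[k] for k in done))
    later = set().union(*(scenes[k] for k in range(len(scenes))
                          if k not in done and k != j))
    return scenes[j] | (started & later)

@lru_cache(maxsize=None)
def best(done):
    """Minimum remaining cost given the set of scenes already shot."""
    if len(done) == len(scenes):
        return 0
    return min(sum(day_rate[a] for a in on_set(done, j)) + best(done | {j})
               for j in range(len(scenes)) if j not in done)

print(best(frozenset()))   # optimal total cost for the toy instance
```

The O(n * 2^n) state space of this DP is exactly what the paper's preprocessing, dominance rules, and cached search states are designed to prune.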
1401.6048 | Ronen I. Brafman | Ronen I. Brafman, Guy Shani | Replanning in Domains with Partial Information and Sensing Actions | null | Journal Of Artificial Intelligence Research, Volume 45, pages
565-600, 2012 | 10.1613/jair.3711 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Replanning via determinization is a recent, popular approach for online
planning in MDPs. In this paper we adapt this idea to classical, non-stochastic
domains with partial information and sensing actions, presenting a new planner:
SDR (Sample, Determinize, Replan). At each step we generate a solution plan to
a classical planning problem induced by the original problem. We execute this
plan as long as it is safe to do so. When this is no longer the case, we
replan. The classical planning problem we generate is based on the
translation-based approach for conformant planning introduced by Palacios and
Geffner. The state of the classical planning problem generated in this approach
captures the belief state of the agent in the original problem. Unfortunately,
when this method is applied to planning problems with sensing, it yields a
planning problem that is both non-deterministic and typically very large. Our
main contribution is the introduction of state sampling techniques for
overcoming these two problems. In addition, we introduce a novel, lazy, regression-based
method for querying the agent's belief state during run-time. We provide a
comprehensive experimental evaluation of the planner, showing that it scales
better than the state-of-the-art CLG planner on existing benchmark problems,
but also highlighting its weaknesses with new domains. We also discuss its
theoretical guarantees.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 16:44:51 GMT"
}
] | 1,390,521,600,000 | [
[
"Brafman",
"Ronen I.",
""
],
[
"Shani",
"Guy",
""
]
] |
1401.6049 | Richard Hoshino | Richard Hoshino, Ken-ichi Kawarabayashi | Generating Approximate Solutions to the TTP using a Linear Distance
Relaxation | null | Journal Of Artificial Intelligence Research, Volume 45, pages
257-286, 2012 | 10.1613/jair.3713 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In some domestic professional sports leagues, the home stadiums are located
in cities connected by a common train line running in one direction. For these
instances, we can incorporate this geographical information to determine
optimal or nearly-optimal solutions to the n-team Traveling Tournament Problem
(TTP), an NP-hard sports scheduling problem whose solution is a double
round-robin tournament schedule that minimizes the sum total of distances
traveled by all n teams. We introduce the Linear Distance Traveling Tournament
Problem (LD-TTP), and solve it for n=4 and n=6, generating the complete set of
possible solutions through elementary combinatorial techniques. For larger n,
we propose a novel "expander construction" that generates an approximate
solution to the LD-TTP. For n congruent to 4 modulo 6, we show that our
expander construction produces a feasible double round-robin tournament
schedule whose total distance is guaranteed to be no worse than 4/3 times the
optimal solution, regardless of where the n teams are located. This
4/3-approximation for the LD-TTP is stronger than the currently best-known
ratio of 5/3 + epsilon for the general TTP. We conclude the paper by applying
this linear distance relaxation to general (non-linear) n-team TTP instances,
where we develop fast approximate solutions by simply "assuming" the n teams
lie on a straight line and solving the modified problem. We show that this
technique surprisingly generates the distance-optimal tournament on all
benchmark sets on 6 teams, as well as close-to-optimal schedules for larger n,
even when the teams are located around a circle or positioned in
three-dimensional space.
| [
{
"version": "v1",
"created": "Thu, 23 Jan 2014 16:45:07 GMT"
}
] | 1,390,521,600,000 | [
[
"Hoshino",
"Richard",
""
],
[
"Kawarabayashi",
"Ken-ichi",
""
]
] |
1401.7249 | Atif Khan | Atif Ali Khan, Oumair Naseer, Daciana Iliescu, Evor Hines | Fuzzy Controller Design for Assisted Omni-Directional Treadmill Therapy | Presented at: "The International Conference on Soft Computing and
Software Engineering (SCSE 2013)" at San Francisco State University at
Downtown Campus, in San Francisco, California, USA, March 1-2, 2013 | The International Journal of Soft Computing and Software
Engineering [JSCSE], Vol. 3, No. 3, pp. 30-37, 2013 | 10.7321/jscse.v3.n3.8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the defining characteristics of human beings is the ability to walk
upright. Loss or restriction of this ability, whether due to an accident, a
spine problem, a stroke or other neurological injuries, can cause tremendous
stress on patients and hence contributes negatively to their quality of life.
Modern research shows that physical exercise is very important for maintaining
physical fitness and adopting a healthier lifestyle. Nowadays the treadmill is
widely used for physical exercise and training, which enables the user to
set up an exercise regime that can be adhered to irrespective of the weather
conditions. Among the users of treadmills today are medical facilities such as
hospitals, rehabilitation centres, medical and physiotherapy clinics etc. The
process of assisted training or doing rehabilitation exercise through treadmill
is referred to as treadmill therapy. A modern treadmill is an automated machine
having built-in functions and predefined features. Most of the treadmills used
today are one-dimensional, and the user can only walk in one direction. This paper
presents the idea of using omnidirectional treadmills which will be more
appealing to the patients as they can walk in any direction, hence encouraging
them to do exercises more frequently. This paper proposes a fuzzy control
design and possible implementation strategy to assist patients in treadmill
therapy. By intelligently controlling the safety belt attached to the treadmill
user, one can help them steer left, right or in any direction. The use of
intelligent treadmill therapy can help patients to improve their walking
ability without being continuously supervised by the specialists. The patients
can walk freely within a limited space and the support system will provide
continuous evaluation of their position and can adjust the control parameters
of the treadmill accordingly to provide the best possible assistance.
| [
{
"version": "v1",
"created": "Fri, 24 Jan 2014 14:25:53 GMT"
}
] | 1,390,953,600,000 | [
[
"Khan",
"Atif Ali",
""
],
[
"Naseer",
"Oumair",
""
],
[
"Iliescu",
"Daciana",
""
],
[
"Hines",
"Evor",
""
]
] |
1401.7463 | Pierre Flener | Pierre Flener and Justin Pearson | Propagators and Violation Functions for Geometric and Workload
Constraints Arising in Airspace Sectorisation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Airspace sectorisation provides a partition of a given airspace into sectors,
subject to geometric constraints and workload constraints, so that some cost
metric is minimised. We make a study of the constraints that arise in airspace
sectorisation. For each constraint, we give an analysis of what algorithms and
properties are required under systematic search and stochastic local search.
| [
{
"version": "v1",
"created": "Wed, 29 Jan 2014 10:36:39 GMT"
}
] | 1,391,040,000,000 | [
[
"Flener",
"Pierre",
""
],
[
"Pearson",
"Justin",
""
]
] |
1401.7941 | Stefano Albrecht | Stefano V. Albrecht, Subramanian Ramamoorthy | Exploiting Causality for Selective Belief Filtering in Dynamic Bayesian
Networks | 44 pages; final manuscript published in Journal of Artificial
Intelligence Research (JAIR) | null | 10.1613/jair.5044 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dynamic Bayesian networks (DBNs) are a general model for stochastic processes
with partially observed states. Belief filtering in DBNs is the task of
inferring the belief state (i.e. the probability distribution over process
states) based on incomplete and noisy observations. This can be a hard problem
in complex processes with large state spaces. In this article, we explore the
idea of accelerating the filtering task by automatically exploiting causality
in the process. We consider a specific type of causal relation, called
passivity, which pertains to how state variables cause changes in other
variables. We present the Passivity-based Selective Belief Filtering (PSBF)
method, which maintains a factored belief representation and exploits passivity
to perform selective updates over the belief factors. PSBF produces exact
belief states under certain assumptions and approximate belief states
otherwise, where the approximation error is bounded by the degree of
uncertainty in the process. We show empirically, in synthetic processes with
varying sizes and degrees of passivity, that PSBF is faster than several
alternative methods while achieving competitive accuracy. Furthermore, we
demonstrate how passivity occurs naturally in a complex system such as a
multi-robot warehouse, and how PSBF can exploit this to accelerate the
filtering task.
| [
{
"version": "v1",
"created": "Thu, 30 Jan 2014 18:05:48 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Dec 2015 14:54:34 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Apr 2016 17:51:09 GMT"
}
] | 1,461,628,800,000 | [
[
"Albrecht",
"Stefano V.",
""
],
[
"Ramamoorthy",
"Subramanian",
""
]
] |
1401.8175 | Toshio Suzuki | Toshio Suzuki and Yoshinao Niida | Equilibrium Points of an AND-OR Tree: under Constraints on Probability | 13 pages, 3 figures | ANN PURE APPL LOGIC 166, pp. 1150--1164 (2015) | 10.1016/j.apal.2015.07.002 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a probability distribution d on the truth assignments to a uniform
binary AND-OR tree. Liu and Tanaka [2007, Inform. Process. Lett.] showed the
following: If d achieves the equilibrium among independent distributions (ID)
then d is an independent identical distribution (IID). We show a stronger form
of the above result. Given a real number r such that 0 < r < 1, we consider a
constraint that the probability of the root node having the value 0 is r. Our
main result is the following: When we restrict ourselves to IDs satisfying this
constraint, the above result of Liu and Tanaka still holds. The proof employs
clever tricks of induction. In particular, we show two fundamental
relationships between expected cost and probability in an IID on an OR-AND
tree: (1) The ratio of the cost to the probability (of the root having the
value 0) is a decreasing function of the probability x of the leaf. (2) The
ratio of the derivative of the cost to the derivative of the probability is a
decreasing function of x, too.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2014 14:22:05 GMT"
},
{
"version": "v2",
"created": "Thu, 10 Jul 2014 09:43:16 GMT"
},
{
"version": "v3",
"created": "Wed, 4 Mar 2015 11:56:18 GMT"
}
] | 1,446,681,600,000 | [
[
"Suzuki",
"Toshio",
""
],
[
"Niida",
"Yoshinao",
""
]
] |
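A small numerical companion to the abstract above: the probability that the root of a uniform binary tree with alternating OR/AND levels evaluates to 0, as a function of the IID leaf probability x. The choice of an OR root and the alternation pattern are made here purely for illustration.

```python
def root_zero_prob(x, height):
    """P(root = 0) for a uniform binary tree with leaves at depth 'height',
    an OR node at depth 0, and levels alternating OR/AND (an assumption)."""
    p = x                                  # P(node = 0) at the leaves
    for depth in reversed(range(height)):  # combine children upwards
        if depth % 2 == 0:                 # OR node: 0 iff both children are 0
            p = p * p
        else:                              # AND node: 0 iff some child is 0
            p = 1 - (1 - p) * (1 - p)
    return p

print(root_zero_prob(0.5, 4))   # P(root = 0) for leaf probability x = 0.5
```

Numerically inverting root_zero_prob in x gives the leaf probability that realises a prescribed root probability r, which is the kind of constraint the paper imposes on its distributions.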
1402.0559 | Peter Nightingale | Peter Nightingale, Ian Philip Gent, Christopher Jefferson, Ian Miguel | Short and Long Supports for Constraint Propagation | null | Journal Of Artificial Intelligence Research, Volume 46, pages
1-45, 2013 | 10.1613/jair.3749 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Special-purpose constraint propagation algorithms frequently make implicit
use of short supports -- by examining a subset of the variables, they can infer
support (a justification that a variable-value pair may still form part of an
assignment that satisfies the constraint) for all other variables and values
and save substantial work -- but short supports have not been studied in their
own right. The two main contributions of this paper are the identification of
short supports as important for constraint propagation, and the introduction of
HaggisGAC, an efficient and effective general purpose propagation algorithm for
exploiting short supports. Given the complexity of HaggisGAC, we present it as
an optimised version of a simpler algorithm ShortGAC. Although experiments
demonstrate the efficiency of ShortGAC compared with other general-purpose
propagation algorithms where a compact set of short supports is available, we
show theoretically and experimentally that HaggisGAC is even better. We also
find that HaggisGAC performs better than GAC-Schema on full-length supports. We
also introduce a variant algorithm HaggisGAC-Stable, which is adapted to avoid
work on backtracking and in some cases can be faster and have significant
reductions in memory use. All the proposed algorithms are excellent for
propagating disjunctions of constraints. In all experiments with disjunctions
we found our algorithms to be faster than Constructive Or and GAC-Schema by at
least an order of magnitude, and up to three orders of magnitude.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:34:04 GMT"
}
] | 1,391,558,400,000 | [
[
"Nightingale",
"Peter",
""
],
[
"Gent",
"Ian Philip",
""
],
[
"Jefferson",
"Christopher",
""
],
[
"Miguel",
"Ian",
""
]
] |
1402.0561 | Gert de Cooman | Gert de Cooman, Enrique Miranda | Irrelevant and independent natural extension for sets of desirable
gambles | null | Journal Of Artificial Intelligence Research, Volume 45, pages
601-640, 2012 | 10.1613/jair.3770 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The results in this paper add useful tools to the theory of sets of desirable
gambles, a growing toolbox for reasoning with partial probability assessments.
We investigate how to combine a number of marginal coherent sets of desirable
gambles into a joint set using the properties of epistemic irrelevance and
independence. We provide formulas for the smallest such joint, called their
independent natural extension, and study its main properties. The independent
natural extension of maximal coherent sets of desirable gambles allows us to
define the strong product of sets of desirable gambles. Finally, we explore an
easy way to generalise these results to also apply for the conditional versions
of epistemic irrelevance and independence. Having such a set of tools that are
easily implemented in computer programs is clearly beneficial to fields, like
AI, with a clear interest in coherent reasoning under uncertainty using general
and robust uncertainty models that require no full specification.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:34:40 GMT"
}
] | 1,391,558,400,000 | [
[
"de Cooman",
"Gert",
""
],
[
"Miranda",
"Enrique",
""
]
] |
1402.0564 | Amanda Jane Coles | Amanda Jane Coles, Andrew Ian Coles, Maria Fox, Derek Long | A Hybrid LP-RPG Heuristic for Modelling Numeric Resource Flows in
Planning | null | Journal Of Artificial Intelligence Research, Volume 46, pages
343-412, 2013 | 10.1613/jair.3788 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although the use of metric fluents is fundamental to many practical planning
problems, the study of heuristics to support fully automated planners working
with these fluents remains relatively unexplored. The most widely used
heuristic is the relaxation of metric fluents into interval-valued variables
--- an idea first proposed a decade ago. Other heuristics depend on domain
encodings that supply additional information about fluents, such as capacity
constraints or other resource-related annotations. A particular challenge to
these approaches is in handling interactions between metric fluents that
represent exchange, such as the transformation of quantities of raw materials
into quantities of processed goods, or trading of money for materials. The
usual relaxation of metric fluents is often very poor in these situations,
since it does not recognise that resources, once spent, are no longer available
to be spent again. We present a heuristic for numeric planning problems
building on the propositional relaxed planning graph, but using a mathematical
program for numeric reasoning. We define a class of producer--consumer planning
problems and demonstrate how the numeric constraints in these can be modelled
in a mixed integer program (MIP). This MIP is then combined with a metric
Relaxed Planning Graph (RPG) heuristic to produce an integrated hybrid
heuristic. The MIP tracks resource use more accurately than the usual
relaxation, but relaxes the ordering of actions, while the RPG captures the
causal propositional aspects of the problem. We discuss how these two
components interact to produce a single unified heuristic and go on to explore
how further numeric features of planning problems can be integrated into the
MIP. We show that encoding a limited subset of the propositional problem to
augment the MIP can yield more accurate guidance, partly by exploiting
structure such as propositional landmarks and propositional resources. Our
results show that the use of this heuristic enhances scalability on problems
where numeric resource interaction is key in finding a solution.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:35:19 GMT"
}
] | 1,391,558,400,000 | [
[
"Coles",
"Amanda Jane",
""
],
[
"Coles",
"Andrew Ian",
""
],
[
"Fox",
"Maria",
""
],
[
"Long",
"Derek",
""
]
] |
1402.0565 | Nima Taghipour | Nima Taghipour, Daan Fierens, Jesse Davis, Hendrik Blockeel | Lifted Variable Elimination: Decoupling the Operators from the
Constraint Language | null | Journal Of Artificial Intelligence Research, Volume 47, pages
393-439, 2013 | 10.1613/jair.3793 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lifted probabilistic inference algorithms exploit regularities in the
structure of graphical models to perform inference more efficiently. More
specifically, they identify groups of interchangeable variables and perform
inference once per group, as opposed to once per variable. The groups are
defined by means of constraints, so the flexibility of the grouping is
determined by the expressivity of the constraint language. Existing approaches
for exact lifted inference use specific languages for (in)equality constraints,
which often have limited expressivity. In this article, we decouple lifted
inference from the constraint language. We define operators for lifted
inference in terms of relational algebra operators, so that they operate on the
semantic level (the constraints' extension) rather than on the syntactic level,
making them language-independent. As a result, lifted inference can be
performed using more powerful constraint languages, which provide more
opportunities for lifting. We empirically demonstrate that this can improve
inference efficiency by orders of magnitude, allowing exact inference where
until now only approximate inference was feasible.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:35:39 GMT"
}
] | 1,391,558,400,000 | [
[
"Taghipour",
"Nima",
""
],
[
"Fierens",
"Daan",
""
],
[
"Davis",
"Jesse",
""
],
[
"Blockeel",
"Hendrik",
""
]
] |
1402.0566 | Frans Adriaan Oliehoek | Frans Adriaan Oliehoek, Matthijs T.J. Spaan, Christopher Amato, Shimon
Whiteson | Incremental Clustering and Expansion for Faster Optimal Planning in
Dec-POMDPs | null | Journal Of Artificial Intelligence Research, Volume 46, pages
449-509, 2013 | 10.1613/jair.3804 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents the state-of-the-art in optimal solution methods for
decentralized partially observable Markov decision processes (Dec-POMDPs),
which are general models for collaborative multiagent planning under
uncertainty. Building off the generalized multiagent A* (GMAA*) algorithm,
which reduces the problem to a tree of one-shot collaborative Bayesian games
(CBGs), we describe several advances that greatly expand the range of
Dec-POMDPs that can be solved optimally. First, we introduce lossless
incremental clustering of the CBGs solved by GMAA*, which achieves exponential
speedups without sacrificing optimality. Second, we introduce incremental
expansion of nodes in the GMAA* search tree, which avoids the need to expand
all children, the number of which is in the worst case doubly exponential in
the node's depth. This is particularly beneficial when little clustering is
possible. In addition, we introduce new hybrid heuristic representations that
are more compact and thereby enable the solution of larger Dec-POMDPs. We
provide theoretical guarantees that, when a suitable heuristic is used, both
incremental clustering and incremental expansion yield algorithms that are both
complete and search equivalent. Finally, we present extensive empirical results
demonstrating that GMAA*-ICE, an algorithm that synthesizes these advances, can
optimally solve Dec-POMDPs of unprecedented size.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:35:59 GMT"
}
] | 1,391,558,400,000 | [
[
"Oliehoek",
"Frans Adriaan",
""
],
[
"Spaan",
"Matthijs T. J.",
""
],
[
"Amato",
"Christopher",
""
],
[
"Whiteson",
"Shimon",
""
]
] |
1402.0568 | Amit Metodi | Amit Metodi, Michael Codish, Peter James Stuckey | Boolean Equi-propagation for Concise and Efficient SAT Encodings of
Combinatorial Problems | arXiv admin note: text overlap with arXiv:1206.3883 | Journal Of Artificial Intelligence Research, Volume 46, pages
303-341, 2013 | 10.1613/jair.3809 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach to propagation-based SAT encoding of combinatorial
problems, Boolean equi-propagation, where constraints are modeled as Boolean
functions which propagate information about equalities between Boolean
literals. This information is then applied to simplify the CNF encoding of the
constraints. A key factor is that considering only a small fragment of a
constraint model at one time enables us to apply stronger, and even complete,
reasoning to detect equivalent literals in that fragment. Once detected,
equivalences apply to simplify the entire constraint model and facilitate
further reasoning on other fragments. Equi-propagation in combination with
partial evaluation and constraint simplification provide the foundation for a
powerful approach to SAT-based finite domain constraint solving. We introduce a
tool called BEE (Ben-Gurion Equi-propagation Encoder) based on these ideas and
demonstrate for a variety of benchmarks that our approach leads to a
considerable reduction in the size of CNF encodings and subsequent speed-ups in
SAT solving times.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:36:36 GMT"
}
] | 1,391,558,400,000 | [
[
"Metodi",
"Amit",
""
],
[
"Codish",
"Michael",
""
],
[
"Stuckey",
"Peter James",
""
]
] |
1402.0571 | Gerald Tesauro | Gerald Tesauro, David C. Gondek, Jonathan Lenchner, James Fan, John M.
Prager | Analysis of Watson's Strategies for Playing Jeopardy! | null | Journal Of Artificial Intelligence Research, Volume 47, pages
205-251, 2013 | 10.1613/jair.3834 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Major advances in Question Answering technology were needed for IBM Watson to
play Jeopardy! at championship level -- the show requires rapid-fire answers to
challenging natural language questions, broad general knowledge, high
precision, and accurate confidence estimates. In addition, Jeopardy! features
four types of decision making carrying great strategic importance: (1) Daily
Double wagering; (2) Final Jeopardy wagering; (3) selecting the next square
when in control of the board; (4) deciding whether to attempt to answer, i.e.,
"buzz in." Using sophisticated strategies for these decisions, that properly
account for the game state and future event probabilities, can significantly
boost a player's overall chances to win, when compared with simple "rule of
thumb" strategies. This article presents our approach to developing Watson's
game-playing strategies, comprising development of a faithful simulation model,
and then using learning and Monte-Carlo methods within the simulator to
optimize Watson's strategic decision-making. After giving a detailed description
of each of our game-strategy algorithms, we then focus in particular on
validating the accuracy of the simulator's predictions, and documenting
performance improvements using our methods. Quantitative performance benefits
are shown with respect to both simple heuristic strategies, and actual human
contestant performance in historical episodes. We further extend our analysis
of human play to derive a number of valuable and counterintuitive examples
illustrating how human contestants may improve their performance on the show.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:37:44 GMT"
}
] | 1,391,558,400,000 | [
[
"Tesauro",
"Gerald",
""
],
[
"Gondek",
"David C.",
""
],
[
"Lenchner",
"Jonathan",
""
],
[
"Fan",
"James",
""
],
[
"Prager",
"John M.",
""
]
] |
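As a toy illustration of decision type (2) in the record above, the sketch below picks a Final Jeopardy wager for a two-player end game by enumerating the four correct/incorrect outcome combinations. Independent correctness, the fixed probabilities, and the known opponent wager are simplifying assumptions; Watson's actual strategy relied on a much richer simulation model.

```python
def win_prob(my_score, opp_score, wager, opp_wager, p_me=0.6, p_opp=0.6):
    """Probability of finishing strictly ahead, enumerating the four
    correct/incorrect outcome combinations for the two players."""
    total = 0.0
    for me_right, p1 in ((True, p_me), (False, 1 - p_me)):
        for opp_right, p2 in ((True, p_opp), (False, 1 - p_opp)):
            mine = my_score + (wager if me_right else -wager)
            theirs = opp_score + (opp_wager if opp_right else -opp_wager)
            total += p1 * p2 * (1.0 if mine > theirs else 0.0)
    return total

def best_wager(my_score, opp_score, opp_wager):
    """Exhaustively score every legal wager under the toy model."""
    return max(range(my_score + 1),
               key=lambda w: win_prob(my_score, opp_score, w, opp_wager))

print(best_wager(12000, 10000, opp_wager=10000))   # e.g. 8001: cover the shutout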
1402.0573 | Srdjan Vesic | Srdjan Vesic | Identifying the Class of Maxi-Consistent Operators in Argumentation | null | Journal Of Artificial Intelligence Research, Volume 47, pages
71-93, 2013 | 10.1613/jair.3860 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dung's abstract argumentation theory can be seen as a general framework for
non-monotonic reasoning. An important question is then: what is the class of
logics that can be subsumed as instantiations of this theory? The goal of this
paper is to identify and study the large class of logic-based instantiations of
Dung's theory which correspond to the maxi-consistent operator, i.e. to the
function which returns maximal consistent subsets of an inconsistent knowledge
base. In other words, we study the class of instantiations where every extension
of the argumentation system corresponds to exactly one maximal consistent
subset of the knowledge base. We show that an attack relation belonging to this
class must be conflict-dependent, must not be valid, must not be
conflict-complete, must not be symmetric etc. Then, we show that some attack
relations serve as lower or upper bounds of the class (e.g. if an attack
relation contains canonical undercut then it is not a member of this class). By
using our results, we show for all existing attack relations whether or not
they belong to this class. We also define new attack relations which are
members of this class. Finally, we interpret our results and discuss more
general questions, like: what is the added value of argumentation in such a
setting? We believe that this work is a first step towards achieving our
long-term goal, which is to better understand the role of argumentation and,
particularly, the expressivity of logic-based instantiations of Dung-style
argumentation frameworks.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:38:48 GMT"
}
] | 1,391,558,400,000 | [
[
"Vesic",
"Srdjan",
""
]
] |
1402.0579 | Masahiro Ono | Masahiro Ono, Brian C. Williams, L. Blackmore | Probabilistic Planning for Continuous Dynamic Systems under Bounded Risk | null | Journal Of Artificial Intelligence Research, Volume 46, pages
511-577, 2013 | 10.1613/jair.3893 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a model-based planner called the Probabilistic Sulu
Planner or the p-Sulu Planner, which controls stochastic systems in a goal
directed manner within user-specified risk bounds. The objective of the p-Sulu
Planner is to allow users to command continuous, stochastic systems, such as
unmanned aerial and space vehicles, in a manner that is both intuitive and
safe. To this end, we first develop a new plan representation called a
chance-constrained qualitative state plan (CCQSP), through which users can
specify the desired evolution of the plant state as well as the acceptable
level of risk. An example of a CCQSP statement is go to A through B within 30
minutes, with less than 0.001% probability of failure." We then develop the
p-Sulu Planner, which can tractably solve a CCQSP planning problem. In order to
enable CCQSP planning, we develop the following two capabilities in this paper:
1) risk-sensitive planning with risk bounds, and 2) goal-directed planning in a
continuous domain with temporal constraints. The first capability ensures
that the probability of failure is bounded. The second capability is essential
for the planner to solve problems with a continuous state space such as vehicle
path planning. We demonstrate the capabilities of the p-Sulu Planner by
simulations on two real-world scenarios: the path planning and scheduling of a
personal aerial vehicle as well as the space rendezvous of an autonomous cargo
spacecraft.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:41:20 GMT"
}
] | 1,391,558,400,000 | [
[
"Ono",
"Masahiro",
""
],
[
"Williams",
"Brian C.",
""
],
[
"Blackmore",
"L.",
""
]
] |
1402.0581 | Neal Andrew Snooke | Neal Andrew Snooke, Mark H Lee | Qualitative Order of Magnitude Energy-Flow-Based Failure Modes and
Effects Analysis | null | Journal Of Artificial Intelligence Research, Volume 46, pages
413-447, 2013 | 10.1613/jair.3898 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a structured power and energy-flow-based qualitative
modelling approach that is applicable to a variety of system types including
electrical and fluid flow. The modelling is split into two parts. Power flow is
a global phenomenon and is therefore naturally represented and analysed by a
network comprised of the relevant structural elements from the components of a
system. The power flow analysis is a platform for higher-level behaviour
prediction of energy related aspects using local component behaviour models to
capture a state-based representation with a global time. The primary
application is Failure Modes and Effects Analysis (FMEA) and a form of
exaggeration reasoning is used, combined with an order of magnitude
representation to derive the worst case failure modes. The novel aspects of the
work are an order of magnitude (OM) qualitative network analyser to represent
any power domain and topology, including multiple power sources, a feature that
was not required for earlier specialised electrical versions of the approach.
Secondly, the representation of generalised energy related behaviour as
state-based local models is presented as a modelling strategy that can be more
vivid and intuitive for a range of topologically complex applications than
qualitative equation-based representations. The two-level modelling strategy
allows the broad system behaviour coverage of qualitative simulation to be
exploited for the FMEA task, while limiting the difficulties of qualitative
ambiguity explanation that can arise from abstracted numerical models. We have
used the method to support an automated FMEA system with examples of an
aircraft fuel system and a domestic heating system discussed in this paper.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:41:55 GMT"
}
] | 1,391,558,400,000 | [
[
"Snooke",
"Neal Andrew",
""
],
[
"Lee",
"Mark H",
""
]
] |
1402.0582 | Maliheh Aramon Bajestani | Maliheh Aramon Bajestani, J. Christopher Beck | Scheduling a Dynamic Aircraft Repair Shop with Limited Repair Resources | null | Journal Of Artificial Intelligence Research, Volume 47, pages
35-70, 2013 | 10.1613/jair.3902 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address a dynamic repair shop scheduling problem in the context of
military aircraft fleet management where the goal is to maintain a full
complement of aircraft over the long-term. A number of flights, each with a
requirement for a specific number and type of aircraft, are already scheduled
over a long horizon. We need to assign aircraft to flights and schedule repair
activities while considering the flights' requirements, repair capacity, and
aircraft failures. The number of aircraft awaiting repair dynamically changes
over time due to failures and it is therefore necessary to rebuild the repair
schedule online. To solve the problem, we view the dynamic repair shop as
successive static repair scheduling sub-problems over shorter time periods. We
propose a complete approach based on the logic-based Benders decomposition to
solve the static sub-problems, and design different rescheduling policies to
schedule the dynamic repair shop. Computational experiments demonstrate that
the Benders model is able to find and prove optimal solutions on average four
times faster than a mixed integer programming model. The rescheduling approach
that combines both aspects, scheduling over a longer horizon and quickly
adjusting the schedule, increases the number of aircraft available in the long
term by 10% compared to approaches having either one of the aspects alone.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:42:10 GMT"
}
] | 1,391,558,400,000 | [
[
"Bajestani",
"Maliheh Aramon",
""
],
[
"Beck",
"J. Christopher",
""
]
] |
1402.0585 | Jose David Fernandez | Jose David Fernandez, Francisco Vico | AI Methods in Algorithmic Composition: A Comprehensive Survey | null | Journal Of Artificial Intelligence Research, Volume 48, pages
513-582, 2013 | 10.1613/jair.3908 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithmic composition is the partial or total automation of the process of
music composition by using computers. Since the 1950s, different computational
techniques related to Artificial Intelligence have been used for algorithmic
composition, including grammatical representations, probabilistic methods,
neural networks, symbolic rule-based systems, constraint programming and
evolutionary algorithms. This survey aims to be a comprehensive account of
research on algorithmic composition, presenting a thorough view of the field
for researchers in Artificial Intelligence.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:43:06 GMT"
}
] | 1,391,558,400,000 | [
[
"Fernandez",
"Jose David",
""
],
[
"Vico",
"Francisco",
""
]
] |
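Of the technique families surveyed in the record above, the probabilistic one is perhaps the easiest to illustrate: the sketch below trains a first-order Markov chain on a toy melody and samples new material from it. The corpus and the restart rule are illustrative assumptions.

```python
import random
from collections import defaultdict

corpus = ["C", "D", "E", "C", "E", "F", "G", "E", "D", "C"]   # toy melody

transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):       # record successor pitches
    transitions[a].append(b)

def compose(start="C", length=8):
    """Sample a melody from the first-order Markov chain."""
    melody = [start]
    while len(melody) < length:
        successors = transitions.get(melody[-1]) or [start]  # dead end: restart
        melody.append(random.choice(successors))
    return melody

print(" ".join(compose()))
```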
1402.0587 | Tal Grinshpoun | Tal Grinshpoun, Alon Grubshtein, Roie Zivan, Arnon Netzer, Amnon
Meisels | Asymmetric Distributed Constraint Optimization Problems | null | Journal Of Artificial Intelligence Research, Volume 47, pages
613-647, 2013 | 10.1613/jair.3945 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distributed Constraint Optimization (DCOP) is a powerful framework for
representing and solving distributed combinatorial problems, where the
variables of the problem are owned by different agents. Many multi-agent
problems include constraints that produce different gains (or costs) for the
participating agents. Asymmetric gains of constrained agents cannot be
naturally represented by the standard DCOP model. The present paper proposes a
general framework for Asymmetric DCOPs (ADCOPs). In ADCOPs different agents may
have different valuations for constraints that they are involved in. The new
framework bridges the gap between multi-agent problems which tend to have
asymmetric structure and the standard symmetric DCOP model. The benefits of the
proposed model over previous attempts to generalize the DCOP model are
discussed and evaluated. Innovative algorithms that apply to the special
properties of the proposed ADCOP model are presented in detail. These include
complete algorithms that have a substantial advantage in terms of runtime and
network load over existing algorithms (for standard DCOPs) which use
alternative representations. Moreover, standard incomplete algorithms (i.e.,
local search algorithms) are inapplicable to the existing DCOP representations
of asymmetric constraints and when they are applied to the new ADCOP framework
they often fail to converge to a local optimum and yield poor results. The
local search algorithms proposed in the present paper converge to high quality
solutions. The experimental evidence that is presented reveals that the
proposed local search algorithms for ADCOPs achieve high quality solutions
while preserving a high level of privacy.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:43:59 GMT"
}
] | 1,391,558,400,000 | [
[
"Grinshpoun",
"Tal",
""
],
[
"Grubshtein",
"Alon",
""
],
[
"Zivan",
"Roie",
""
],
[
"Netzer",
"Arnon",
""
],
[
"Meisels",
"Amnon",
""
]
] |
1402.0590 | Diederik Marijn Roijers | Diederik Marijn Roijers, Peter Vamplew, Shimon Whiteson, Richard
Dazeley | A Survey of Multi-Objective Sequential Decision-Making | null | Journal Of Artificial Intelligence Research, Volume 48, pages
67-113, 2013 | 10.1613/jair.3987 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential decision-making problems with multiple objectives arise naturally
in practice and pose unique challenges for research in decision-theoretic
planning and learning, which has largely focused on single-objective settings.
This article surveys algorithms designed for sequential decision-making
problems with multiple objectives. Though there is a growing body of literature
on this subject, little of it makes explicit under what circumstances special
methods are needed to solve multi-objective problems. Therefore, we identify
three distinct scenarios in which converting such a problem to a
single-objective one is impossible, infeasible, or undesirable. Furthermore, we
propose a taxonomy that classifies multi-objective methods according to the
applicable scenario, the nature of the scalarization function (which projects
multi-objective values to scalar ones), and the type of policies considered. We
show how these factors determine the nature of an optimal solution, which can
be a single policy, a convex hull, or a Pareto front. Using this taxonomy, we
survey the literature on multi-objective methods for planning and learning.
Finally, we discuss key applications of such methods and outline opportunities
for future work.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2014 01:45:08 GMT"
}
] | 1,391,558,400,000 | [
[
"Roijers",
"Diederik Marijn",
""
],
[
"Vamplew",
"Peter",
""
],
[
"Whiteson",
"Shimon",
""
],
[
"Dazeley",
"Richard",
""
]
] |
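One solution concept from the taxonomy in the record above, the Pareto front, can be computed for a finite set of policy value vectors with a simple dominance filter, sketched below under the assumption that higher is better in every objective. The vectors are illustrative.

```python
def dominates(u, v):
    """u Pareto-dominates v: at least as good everywhere, better somewhere."""
    return all(a >= b for a, b in zip(u, v)) and any(a > b for a, b in zip(u, v))

def pareto_front(values):
    """Keep exactly the vectors no other vector dominates."""
    return [v for v in values
            if not any(dominates(u, v) for u in values if u != v)]

policies = [(3, 1), (2, 2), (1, 3), (2, 1), (1, 1)]   # toy value vectors
print(pareto_front(policies))   # [(3, 1), (2, 2), (1, 3)]
```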
1402.1361 | Jean-Guillaume Fages | Jean-Guillaume Fages, Gilles Chabert and Charles Prud'homme | Combining finite and continuous solvers | Presented at Workshop TRICS in conference CP'13 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Combining efficiency with reliability within CP systems is one of the main
concerns of CP developers. This paper presents a simple and efficient way to
connect Choco and Ibex, two CP solvers specialised on finite and continuous
domains respectively. This makes it possible to take advantage of the most recent advances
of the continuous community within Choco while saving development and
maintenance resources, hence ensuring a better software quality.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2014 14:21:26 GMT"
}
] | 1,391,731,200,000 | [
[
"Fages",
"Jean-Guillaume",
""
],
[
"Chabert",
"Gilles",
""
],
[
"Prud'homme",
"Charles",
""
]
] |
1402.1500 | Eran Shaham Mr. | Eran Shaham, David Sarne, Boaz Ben-Moshe | Co-clustering of Fuzzy Lagged Data | Under consideration for publication in Knowledge and Information
Systems. The final publication is available at Springer via
http://dx.doi.org/10.1007/s10115-014-0758-7 | null | 10.1007/s10115-014-0758-7 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper focuses on mining patterns that are characterized by a fuzzy lagged
relationship between the data objects forming them. Such a regulatory mechanism
is quite common in real life settings. It appears in a variety of fields:
finance, gene expression, neuroscience, crowds and collective movements are but
a limited list of examples. Mining such patterns not only helps in
understanding the relationship between objects in the domain, but assists in
forecasting their future behavior. For most interesting variants of this
problem, finding an optimal fuzzy lagged co-cluster is an NP-complete problem.
We thus present a polynomial-time Monte-Carlo approximation algorithm for
mining fuzzy lagged co-clusters. We prove that for any data matrix, the
algorithm mines a fuzzy lagged co-cluster with fixed probability, which
encompasses the optimal fuzzy lagged co-cluster with at most a factor-2
overhead in columns and no overhead in rows. Moreover, the algorithm handles
noise, anti-correlations, missing values and overlapping patterns. The
algorithm was extensively evaluated using both artificial and real datasets.
The results not only corroborate the ability of the algorithm to efficiently
mine relevant and accurate fuzzy lagged co-clusters, but also illustrate the
importance of including the fuzziness in the lagged-pattern model.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2014 21:02:16 GMT"
},
{
"version": "v2",
"created": "Thu, 15 May 2014 12:01:08 GMT"
}
] | 1,400,198,400,000 | [
[
"Shaham",
"Eran",
""
],
[
"Sarne",
"David",
""
],
[
"Ben-Moshe",
"Boaz",
""
]
] |
1402.1956 | Lakhdar Sais | Said Jabbour and Jerry Lonlac and Lakhdar Sais and Yakoub Salhi | Revisiting the Learned Clauses Database Reduction Strategies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we revisit an important issue of CDCL-based SAT solvers,
namely the learned clauses database management policies. Our motivation takes
its source from a simple observation on the remarkable performances of both
random and size-bounded reduction strategies. We first derive a simple
reduction strategy, called Size-Bounded Randomized strategy (in short SBR),
that combines maintaining short clauses (of size bounded by k) with randomly
deleting clauses of size greater than k. The resulting strategy outperforms the
state-of-the-art, namely the LBD-based one, on SAT instances taken from the
last SAT competition. Reinforced by the interest of keeping short clauses, we
propose several new dynamic variants, and we discuss their performances.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2014 15:14:24 GMT"
}
] | 1,392,076,800,000 | [
[
"Jabbour",
"Said",
""
],
[
"Lonlac",
"Jerry",
""
],
[
"Sais",
"Lakhdar",
""
],
[
"Salhi",
"Yakoub",
""
]
] |
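A minimal sketch of the SBR strategy described in the record above, under illustrative parameters: clauses of size at most k are always kept, while longer clauses are deleted uniformly at random until the database fits a target size.

```python
import random

def sbr_reduce(clauses, k=2, target=4):
    """Size-Bounded Randomized reduction: keep all short clauses, then fill
    the remaining budget with randomly chosen long clauses."""
    short = [c for c in clauses if len(c) <= k]    # always kept
    long_ = [c for c in clauses if len(c) > k]
    random.shuffle(long_)                          # random deletion victims
    return short + long_[:max(0, target - len(short))]

db = [(1,), (2, -3), (4, 5, -6), (-1, 2, 3, 4), (5, -2, 7), (6, 7, -8, 9)]
print(sbr_reduce(db))
```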
1402.1986 | Djallel Bouneffouf | Djallel Bouneffouf | Mobile context-aware recommendation of evolving content:
Contextuel-E-Greedy | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce in this paper an algorithm named Contextuel-E-Greedy that
tackles the dynamicity of the user's content. It is based on a dynamic
exploration/exploitation tradeoff and can adaptively balance the two aspects by
deciding which situation is most relevant for exploration or exploitation. The
experimental results demonstrate that our algorithm outperforms surveyed
algorithms.
| [
{
"version": "v1",
"created": "Sun, 9 Feb 2014 20:28:55 GMT"
}
] | 1,392,076,800,000 | [
[
"Bouneffouf",
"Djallel",
""
]
] |
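A minimal sketch in the spirit of the algorithm above: an epsilon-greedy contextual bandit whose exploration rate adapts per situation, exploring more in rarely seen contexts and exploiting more in familiar ones. The context set, the reward process, and the epsilon schedule are illustrative assumptions, not the paper's exact method.

```python
import random
from collections import defaultdict

n_arms = 3
counts = defaultdict(lambda: [0] * n_arms)    # pulls per (context, arm)
values = defaultdict(lambda: [0.0] * n_arms)  # mean reward estimates

def epsilon(context):
    """Explore less in contexts we have already observed many times."""
    return max(0.05, 1.0 / (1 + sum(counts[context])))

def choose(context):
    if random.random() < epsilon(context):
        return random.randrange(n_arms)                           # explore
    return max(range(n_arms), key=lambda a: values[context][a])   # exploit

def update(context, arm, reward):
    counts[context][arm] += 1
    n = counts[context][arm]
    values[context][arm] += (reward - values[context][arm]) / n   # running mean

for t in range(1000):                         # toy interaction loop
    ctx = random.choice(["home", "work"])
    arm = choose(ctx)
    reward = float(random.random() < (0.2 + 0.3 * arm))  # arm 2 is best
    update(ctx, arm, reward)

print({c: [round(v, 2) for v in values[c]] for c in ("home", "work")})
```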
1402.3490 | Xinyang Deng | Xinyang Deng, Yong Deng | D numbers theory: a generalization of Dempster-Shafer theory | This paper has been withdrawn by the authors due to a crucial error
of the combination rule | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dempster-Shafer theory is widely applied to uncertainty modelling and
knowledge reasoning due to its ability of expressing uncertain information.
However, some conditions, such as the exclusiveness hypothesis and the
completeness constraint, limit its development and application to a large extent. To
overcome these shortcomings in Dempster-Shafer theory and enhance its
capability of representing uncertain information, a novel theory called D
numbers theory is systematically proposed in this paper. Within the proposed
theory, uncertain information is expressed by D numbers, while reasoning and
synthesis of information are implemented by the D numbers combination rule.
The proposed D numbers theory is a generalization of Dempster-Shafer theory,
which inherits the advantage of Dempster-Shafer theory and strengthens its
capability of uncertainty modelling.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2014 15:15:26 GMT"
},
{
"version": "v2",
"created": "Mon, 12 May 2014 15:47:48 GMT"
}
] | 1,399,939,200,000 | [
[
"Deng",
"Xinyang",
""
],
[
"Deng",
"Yong",
""
]
] |
1402.3664 | Xinyang Deng | Xinyang Deng, Yong Hu, Felix Chan, Sankaran Mahadevan, Yong Deng | Parameter estimation based on interval-valued belief structures | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parameter estimation based on uncertain data represented as belief structures
is one of the latest problems in the Dempster-Shafer theory. In this paper, a
novel method is proposed for the parameter estimation in the case where belief
structures are uncertain and represented as interval-valued belief structures.
Within our proposed method, the maximization of the likelihood criterion and
the minimization of the estimated parameter's uncertainty are taken into consideration
simultaneously. As an illustration, the proposed method is employed to estimate
parameters for deterministic and uncertain belief structures, which
demonstrates its effectiveness and versatility.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2014 08:07:49 GMT"
}
] | 1,392,681,600,000 | [
[
"Deng",
"Xinyang",
""
],
[
"Hu",
"Yong",
""
],
[
"Chan",
"Felix",
""
],
[
"Mahadevan",
"Sankaran",
""
],
[
"Deng",
"Yong",
""
]
] |
1402.4525 | Saminda Abeyruwan | Saminda Abeyruwan and Andreas Seekircher and Ubbo Visser | Off-Policy General Value Functions to Represent Dynamic Role Assignments
in RoboCup 3D Soccer Simulation | 18 pages, 8 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | Collecting and maintaining accurate world knowledge in a dynamic, complex,
adversarial, and stochastic environment such as the RoboCup 3D Soccer
Simulation is a challenging task. Knowledge should be learned in real time,
under time constraints. We use recently introduced Off-Policy Gradient Descent
algorithms within Reinforcement Learning that illustrate learnable knowledge
representations for dynamic role assignments. The results show that the agents
have learned competitive policies against the top teams from the RoboCup 2012
competitions for three vs three, five vs five, and seven vs seven agents. We
have explicitly used subsets of agents to identify the dynamics and the
semantics for which the agents learn to maximize their performance measures,
and to gather knowledge about different objectives, so that all agents
participate effectively and efficiently within the group.
| [
{
"version": "v1",
"created": "Tue, 18 Feb 2014 23:01:13 GMT"
}
] | 1,392,854,400,000 | [
[
"Abeyruwan",
"Saminda",
""
],
[
"Seekircher",
"Andreas",
""
],
[
"Visser",
"Ubbo",
""
]
] |
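As a simplified relative of the off-policy learners mentioned in the record above, the sketch below runs off-policy TD(0) with importance-sampling ratios on a toy two-state chain, learning about a target policy from behaviour-policy experience. The paper itself uses gradient-TD methods with stronger convergence guarantees; the chain, policies, and rewards here are all illustrative assumptions.

```python
import random

V = [0.0, 0.0]          # value estimates for the two states
alpha, gamma = 0.1, 0.9
pi = [0.9, 0.1]         # target policy: P(action 0 | state)
b = [0.5, 0.5]          # behaviour policy actually generating the data

s = 0
for t in range(10000):
    a = 0 if random.random() < b[s] else 1
    # Importance-sampling ratio pi(a|s) / b(a|s) reweights the update.
    rho = (pi[s] if a == 0 else 1 - pi[s]) / (b[s] if a == 0 else 1 - b[s])
    s2 = a                          # toy dynamics: the action picks the state
    r = 1.0 if s2 == 0 else 0.0    # reward for reaching state 0
    V[s] += alpha * rho * (r + gamma * V[s2] - V[s])
    s = s2

print(V)   # approximate values of the target policy
```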
1402.5037 | Lucas Paletta | Ian Dunwell, Panagiotis Petridis, Petros Lameras, Maurice Hendrix, and
Stella Doukianou, Mark Gaved | Assessing the Reach and Impact of Game-Based Learning Approaches to
Cultural Competency and Behavioural Change | null | null | null | IDGEI/2014/08 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As digital games continue to be explored as solutions to educational and
behavioural challenges, the need for evaluation methodologies which support
both the unique nature of the format and the need for comparison with other
approaches continues to increase. In this workshop paper, a range of challenges
are described related specifically to the case of cultural learning using
digital games, in terms of how it may best be assessed, understood, and
sustained through an iterative process supported by research. An evaluation
framework is proposed, identifying metrics for reach and impact and their
associated challenges, as well as presenting ethical considerations and the
means to utilize evaluation outcomes within an iterative cycle, and to provide
feedback to learners. Presenting as a case study a serious game from the Mobile
Assistance for Social Inclusion and Empowerment of Immigrants with Persuasive
Learning Technologies and Social Networks (MASELTOV) project, the use of the
framework in the context of an integrative project is discussed, with emphasis
on the need to view game-based learning as a blended component of the cultural
learning process, rather than a standalone solution. The particular case of
mobile gaming is also considered within this case study, providing a platform
by which to deliver and update content in response to evaluation outcomes.
Discussion reflects upon the general challenges related to the assessment of
cultural learning, and behavioural change in more general terms, suggesting
future work should address the need to provide sustainable, research-driven
platforms for game-based learning content.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2014 15:37:23 GMT"
}
] | 1,392,940,800,000 | [
[
"Dunwell",
"Ian",
""
],
[
"Petridis",
"Panagiotis",
""
],
[
"Lameras",
"Petros",
""
],
[
"Hendrix",
"Maurice",
""
],
[
"Doukianou",
"Stella",
""
],
[
"Gaved",
"Mark",
""
]
] |
1402.5043 | Lucas Paletta | Marwen Belkaid, Nicolas Sabouret | A logical model of Theory of Mind for virtual agents in the context of
job interview simulation | null | null | null | IDGEI/2014/10 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Job interview simulation with virtual agents aims at improving people's
social skills and supporting professional inclusion. In such simulators, the
virtual agent must be capable of representing and reasoning about the user's
mental state based on social cues that inform the system about his/her affects
and social attitude. In this paper, we propose a formal model of Theory of Mind
(ToM) for virtual agent in the context of human-agent interaction that focuses
on the affective dimension. It relies on a hybrid ToM that combines the two
major paradigms of the domain. Our framework is based on modal logic and
inference rules about the mental states, emotions and social relations of both
actors. Finally, we present preliminary results regarding the impact of such a
model on natural interaction in the context of job interviews simulation.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2014 15:40:08 GMT"
}
] | 1,392,940,800,000 | [
[
"Belkaid",
"Marwen",
""
],
[
"Sabouret",
"Nicolas",
""
]
] |
1402.5358 | J\'anos P\'anovics | Tam\'as K\'adek and J\'anos P\'anovics | Extended Breadth-First Search Algorithm | 5 pages, 1 figure, 1 table | International Journal of Computer Science Issues, Volume 10, Issue
6, No 2, ISSN (Print): 1694-0814, ISSN (Online): 1694-0784, November 2013,
pp. 78-82 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The task of artificial intelligence is to provide representation techniques
for describing problems, as well as search algorithms that can be used to
answer our questions. A widespread and elaborated model is state-space
representation, which, however, has some shortcomings. Classical search
algorithms are not applicable in practice when the state space contains even
only a few tens of thousands of states. We can give remedy to this problem by
defining some kind of heuristic knowledge. In case of classical state-space
representation, heuristic must be defined so that it qualifies an arbitrary
state based on its "goodness," which is obviously not trivial. In our paper, we
introduce an algorithm that gives us the ability to handle huge state spaces
and to use a heuristic concept which is easier to embed into search algorithms.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2014 17:21:52 GMT"
}
] | 1,393,200,000,000 | [
[
"Kádek",
"Tamás",
""
],
[
"Pánovics",
"János",
""
]
] |
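For contrast with the extension proposed in the record above, here is the classical breadth-first search the paper starts from, on a toy explicit graph. The instance is illustrative; the paper's point is precisely that such naive enumeration breaks down on huge state spaces.

```python
from collections import deque

def bfs(start, goal, neighbours):
    """Plain breadth-first search; returns a shortest path or None."""
    frontier, parent = deque([start]), {start: None}
    while frontier:
        s = frontier.popleft()
        if s == goal:                        # reconstruct the path backwards
            path = []
            while s is not None:
                path.append(s)
                s = parent[s]
            return path[::-1]
        for n in neighbours(s):
            if n not in parent:              # first visit = shortest route
                parent[n] = s
                frontier.append(n)
    return None

graph = {1: [2, 3], 2: [4], 3: [4, 5], 4: [6], 5: [6], 6: []}
print(bfs(1, 6, lambda s: graph[s]))         # e.g. [1, 2, 4, 6]
```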
1402.5379 | Eray Ozkural | Eray \"Ozkural | What Is It Like to Be a Brain Simulation? | 10 pages, draft of conference paper published in AGI 2012, also
accepted to AISB 2012 but it was too late to arrange travel, unfortunately;
Artificial General Intelligence, 5th International Conference, AGI 2012,
Oxford, UK, December 8-11, 2012. Proceedings | null | 10.1007/978-3-642-35506-6_24 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We frame the question of what kind of subjective experience a brain
simulation would have in contrast to a biological brain. We discuss the brain
prosthesis thought experiment. We evaluate how the experience of the brain
simulation might differ from the biological, according to a number of
hypotheses about experience and the properties of simulation. Then, we identify
finer questions relating to the original inquiry, and answer them from both a
general physicalist, and panexperientialist perspective.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2014 17:19:53 GMT"
}
] | 1,393,200,000,000 | [
[
"Özkural",
"Eray",
""
]
] |
1402.5380 | Eray Ozkural | Eray \"Ozkural | Godseed: Benevolent or Malevolent? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is hypothesized by some thinkers that benign looking AI objectives may
result in powerful AI drives that may pose an existential risk to human
society. We analyze this scenario and find the underlying assumptions to be
unlikely. We examine the alternative scenario of what happens when universal
goals that are not human-centric are used for designing AI agents. We follow a
design approach that tries to exclude malevolent motivations from AI agents;
however, we see that objectives that seem benevolent may pose significant risk.
We consider the following meta-rules: preserve and pervade life and culture,
maximize the number of free minds, maximize intelligence, maximize wisdom,
maximize energy production, behave like a human, seek pleasure, accelerate
evolution, survive, maximize control, and maximize capital. We also discuss
various solution approaches for benevolent behavior including selfless goals,
hybrid designs, Darwinism, universal constraints, semi-autonomy, and
generalization of robot laws. A "prime directive" for AI may help in
formulating an encompassing constraint for avoiding malicious behavior. We
hypothesize that social instincts, such as attachment learning, may be effective
for autonomous robots. We mention multiple beneficial scenarios for an
advanced semi-autonomous AGI agent in the near future including space
exploration, automation of industries, state functions, and cities. We conclude
that a beneficial AI agent with intelligence beyond human-level is possible and
has many practical use cases.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2014 17:35:53 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2016 21:43:51 GMT"
}
] | 1,476,230,400,000 | [
[
"Özkural",
"Eray",
""
]
] |
1402.5593 | Rustam Tagiew | Rustam Tagiew and Dmitry I. Ignatov | Reciprocity in Gift-Exchange-Games | 6 pages, 2 figures, 5 tables | Experimental Economics and Machine Learning 2016, CEUR-WS
Vol-1627, urn:nbn:de:0074-1627-1 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an analysis of data from a gift-exchange-game experiment.
The experiment was described in `The Impact of Social Comparisons on
Reciprocity' by G\"achter et al. (2012). Since this paper uses state-of-the-art
data science techniques, the results provide a different point of view on the
problem. As already shown in the relevant literature from experimental
economics, human decisions deviate from rational payoff maximization. The
average gift rate was $31$%, and under no condition was the gift rate zero.
Further, we derive some more specific findings and calculate their significance.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2014 10:07:59 GMT"
}
] | 1,717,372,800,000 | [
[
"Tagiew",
"Rustam",
""
],
[
"Ignatov",
"Dmitry I.",
""
]
] |
1402.6560 | Jesus Cerquides | Jordi Roca-Lacostena, Jesus Cerquides | Even more generic solution construction in Valuation-Based Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Valuation algebras abstract a large number of formalisms for automated
reasoning and enable the definition of generic inference procedures. Many of
these formalisms provide some notions of solutions. Typical examples are
satisfying assignments in constraint systems, models in logics or solutions to
linear equation systems.
Recently, formal requirements for the presence of solutions and a generic
algorithm for solution construction based on the results of a previously
executed inference scheme have been proposed in the literature. Unfortunately,
the formalization of Pouly and Kohlas relies on a theorem for which we provide
a counterexample. In spite of that, the main line of the theory described is
correct, although some of the necessary conditions to apply some of the
algorithms have to be revised. To fix the theory, we generalize some of their
definitions and provide correct sufficient conditions for the algorithms. As a
result, we get a more general and corrected version of the already existing
theory.
| [
{
"version": "v1",
"created": "Wed, 26 Feb 2014 14:51:57 GMT"
}
] | 1,393,459,200,000 | [
[
"Roca-Lacostena",
"Jordi",
""
],
[
"Cerquides",
"Jesus",
""
]
] |
1403.0034 | Manfred Eppe | Manfred Eppe | Tractable Epistemic Reasoning with Functional Fluents, Static Causal
Laws and Postdiction | There are flaws in the mathematical background. The paper has been
reviewed at a conference and there are fundamental issues with the proposed
methodology that cannot be addressed with a simple correction notice | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an epistemic action theory for tractable epistemic reasoning as an
extension to the h-approximation (HPX) theory. In contrast to existing
tractable approaches, the theory supports functional fluents and postdictive
reasoning with static causal laws. We argue that this combination is
particularly synergistic because it allows one not only to perform direct
postdiction about the conditions of actions, but also indirect postdiction
about the conditions of static causal laws. We show that despite the richer
expressiveness, the temporal projection problem remains tractable (polynomial),
and therefore the planning problem remains in NP. We present the operational
semantics of our theory as well as its formulation as Answer Set Programming.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2014 00:39:26 GMT"
},
{
"version": "v2",
"created": "Sat, 22 Mar 2014 17:56:47 GMT"
},
{
"version": "v3",
"created": "Sun, 8 Dec 2019 13:36:33 GMT"
},
{
"version": "v4",
"created": "Wed, 25 Nov 2020 15:18:51 GMT"
}
] | 1,606,348,800,000 | [
[
"Eppe",
"Manfred",
""
]
] |
1403.0036 | Menghan Wang | Menghan Wang | Dynamic Decision Process Modeling and Relation-line Handling in
Distributed Cooperative Modeling System | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Distributed Cooperative Modeling System (DCMS) solves complex decision
problems involving many participants with different viewpoints through
network-based distributed modeling and multi-template aggregation.
This thesis aims at extending the system with support for dynamic
decision-making processes. First, the thesis discusses the characteristics of
the Markov Decision Process (MDP) and the search for its optimal policy, and
gives a brief introduction to the dynamic Bayesian decision network, which is
inherently equivalent to an MDP. After that, the discussion and implementation
of prediction in Markov processes for both discrete and continuous random
variables are given, as well as several different kinds of correlation analysis
among multiple indices, which could help decision-makers understand the
interaction of indices and design appropriate policies.
The appending of historical data on Macau's industries, as the foundation for
extending DCMS, is introduced. Additional work includes a rearrangement of the
graphical class hierarchy in DCMS, which in turn allows a convenient
implementation of curved relation-lines, making template modeling clearer and
friendlier.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2014 01:12:34 GMT"
}
] | 1,393,891,200,000 | [
[
"Wang",
"Menghan",
""
]
] |
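Since the thesis above (1403.0036) centers on optimal-policy finding for Markov Decision Processes, a standard value-iteration sketch may serve as a reference point; the `S`/`A`/`P`/`R` structures are hypothetical and not taken from the thesis:

```python
def value_iteration(S, A, P, R, gamma=0.95, eps=1e-6):
    """Standard MDP value iteration. S: states; A[s]: actions in s;
    P[s][a]: list of (prob, next_state); R[s][a]: immediate reward."""
    V = {s: 0.0 for s in S}
    while True:
        delta = 0.0
        for s in S:
            best = max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                       for a in A[s])
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:                  # value function has converged
            break
    policy = {s: max(A[s], key=lambda a: R[s][a] +
                     gamma * sum(p * V[t] for p, t in P[s][a]))
              for s in S}
    return V, policy
```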
1403.0522 | Ahmad Taher Azar Dr. | Ahmad Taher Azar, Aboul Ella Hassanien | Expert System Based On Neural-Fuzzy Rules for Thyroid Diseases Diagnosis | Conference Paper | null | 10.1007/978-3-642-35521-9_13 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The thyroid, an endocrine gland that secretes hormones in the blood,
circulates its products to all tissues of the body, where they control vital
functions in every cell. Normal levels of thyroid hormone help the brain,
heart, intestines, muscles and reproductive system function normally. Thyroid
hormones control the metabolism of the body. Abnormalities of thyroid function
are usually related to production of too little thyroid hormone
(hypothyroidism) or production of too much thyroid hormone (hyperthyroidism).
Therefore, the correct diagnosis of these diseases is a very important topic. In
this study, Linguistic Hedges Neural-Fuzzy Classifier with Selected Features
(LHNFCSF) is presented for diagnosis of thyroid diseases. The performance
evaluation of this system is estimated by using classification accuracy and
k-fold cross-validation. The results indicated that the classification accuracy
without feature selection was 98.6047% and 97.6744% during training and testing
phases, respectively with RMSE of 0.02335. After applying feature selection
algorithm, LHNFCSF achieved 100% for all cluster sizes during training phase.
However, in the testing phase LHNFCSF achieved 88.3721% using one cluster for
each class, 90.6977% using two clusters, 91.8605% using three clusters and
97.6744% using four clusters for each class and 12 fuzzy rules. The obtained
classification accuracy was very promising compared with other classification
applications in the literature for this problem.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 18:50:08 GMT"
}
] | 1,393,891,200,000 | [
[
"Azar",
"Ahmad Taher",
""
],
[
"Hassanien",
"Aboul Ella",
""
]
] |
1403.0613 | Zhiguo Long | Sanjiang Li, Zhiguo Long, Weiming Liu, Matt Duckham, Alan Both | On Redundant Topological Constraints | An extended abstract appears in Proceedings of the 14th International
Conference on the Principles of Knowledge Representation and Reasoning
(KR-14), Vienna, Austria, July 20-24, 2014 | Artificial Intelligence 225 (2015) 51-76 | 10.1016/j.artint.2015.03.010 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Region Connection Calculus (RCC) is a well-known calculus for
representing part-whole and topological relations. It plays an important role
in qualitative spatial reasoning, geographical information science, and
ontology. The computational complexity of reasoning with RCC5 and RCC8 (two
fragments of RCC) as well as other qualitative spatial/temporal calculi has
been investigated in depth in the literature. Most of these works focus on the
consistency of qualitative constraint networks. In this paper, we consider the
important problem of redundant qualitative constraints. For a set $\Gamma$ of
qualitative constraints, we say a constraint $(x R y)$ in $\Gamma$ is redundant
if it is entailed by the rest of $\Gamma$. A prime subnetwork of $\Gamma$ is a
subset of $\Gamma$ which contains no redundant constraints and has the same
solution set as $\Gamma$. It is natural to ask how to compute such a prime
subnetwork, and when it is unique.
In this paper, we show that this problem is in general intractable, but
becomes tractable if $\Gamma$ is over a tractable subalgebra $\mathcal{S}$ of a
qualitative calculus. Furthermore, if $\mathcal{S}$ is a subalgebra of RCC5 or
RCC8 in which weak composition distributes over nonempty intersections, then
$\Gamma$ has a unique prime subnetwork, which can be obtained in cubic time by
removing all redundant constraints simultaneously from $\Gamma$. As a
byproduct, we show that any path-consistent network over such a distributive
subalgebra is weakly globally consistent and minimal. A thorough empirical
analysis of the prime subnetwork upon real geographical data sets demonstrates
the approach is able to identify significantly more redundant constraints than
previously proposed algorithms, especially in constraint networks with larger
proportions of partial overlap relations.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2014 22:01:16 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Feb 2015 15:22:28 GMT"
}
] | 1,487,635,200,000 | [
[
"Li",
"Sanjiang",
""
],
[
"Long",
"Zhiguo",
""
],
[
"Liu",
"Weiming",
""
],
[
"Duckham",
"Matt",
""
],
[
"Both",
"Alan",
""
]
] |
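The redundancy notion in the record above (1403.0613) can be illustrated on a much simpler calculus than RCC: in the point algebra, path closure decides entailment, so a constraint is redundant exactly when the network with that constraint relaxed still entails it after closure. A toy sketch under that assumption, not the paper's RCC algorithms (`net[(i,j)]` must be present for every ordered pair `i != j`):

```python
from itertools import product

BASE = frozenset('<=>')
COMP = {('<','<'): set('<'), ('<','='): set('<'), ('<','>'): set(BASE),
        ('=','<'): set('<'), ('=','='): set('='), ('=','>'): set('>'),
        ('>','<'): set(BASE), ('>','='): set('>'), ('>','>'): set('>')}
CONV = {'<': '>', '=': '=', '>': '<'}

def compose(r1, r2):
    out = set()
    for a, b in product(r1, r2):
        out |= COMP[(a, b)]
    return out

def path_closure(n, net):
    """Enforce r(i,k) <= r(i,j) o r(j,k) until a fixed point; for the
    point algebra this yields the minimal network."""
    net = {e: set(r) for e, r in net.items()}
    changed = True
    while changed:
        changed = False
        for i, j, k in product(range(n), repeat=3):
            if len({i, j, k}) < 3:
                continue
            new = net[(i, k)] & compose(net[(i, j)], net[(j, k)])
            if new != net[(i, k)]:
                net[(i, k)] = new
                net[(k, i)] = {CONV[a] for a in new}  # keep converses in sync
                changed = True
    return net

def is_redundant(n, net, i, j):
    """(i, j) is redundant if relaxing it to the universal relation and
    re-closing the network still entails the original constraint."""
    relaxed = {e: set(r) for e, r in net.items()}
    relaxed[(i, j)], relaxed[(j, i)] = set(BASE), set(BASE)
    return path_closure(n, relaxed)[(i, j)] <= net[(i, j)]
```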
1403.0764 | Kieran Greer Dr | Kieran Greer | Clustering Concept Chains from Ordered Data without Path Descriptions | Pre-print | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a process for clustering concepts into chains from data
presented randomly to an evaluating system. There are a number of rules or
guidelines that help the system to determine more accurately which concepts
belong to a particular chain and which ones do not, but it should be possible to
write these in a generic way. This mechanism also uses a flat structure without
any hierarchical path information, where the link between two concepts is made
at the level of the concept itself. It does not require related metadata, but
instead, a simple counting mechanism is used. Key to this is a count for both
the concept itself and also the group or chain that it belongs to. To test the
possible success of the mechanism, concept chain parts taken randomly from a
larger ontology were presented to the system, but only at a depth of 2 concepts
each time, that is, a root concept plus a concept that it is linked to. The
results show that this can still lead to very variable structures being formed
and can also accommodate some level of randomness.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2014 12:37:36 GMT"
}
] | 1,393,977,600,000 | [
[
"Greer",
"Kieran",
""
]
] |
1403.1076 | Kieran Greer Dr | Kieran Greer | Is Intelligence Artificial? | This new version adds some clarity to the discussion. Also the
opportunity to extend or update some sections. Some new references | Euroasia Summit, Congress on Scientific Researches and Recent
Trends-8, August 2-4, 2021, The Philippine Merchant Marine Academy,
Philippines, pp. 307 - 324 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our understanding of intelligence is directed primarily at the human level.
This paper attempts to give a more unifying definition that can be applied to
the natural world in general and then Artificial Intelligence. The definition
would be used more to verify a relative intelligence, not to quantify it, and
might help when making judgements on the matter. While correct behaviour is the
preferred definition, a metric that is grounded in Kolmogorov's Complexity
Theory is suggested, which leads to a measurement about entropy. A version of
an accepted AI test is then put forward as the 'acid test' and might be what a
free-thinking program would try to achieve. Recent work by the author has been
more from a direction of mechanical processes, or ones that might operate
automatically. This paper agrees that intelligence is a pro-active event, but
also notes a second aspect to it that is in the background and mechanical. The
paper suggests looking at intelligence and consciousness as being slightly
different, where consciousness is this more mechanical aspect. In fact, a
surprising conclusion can be a passive but intelligent brain being invoked by
active and less intelligent senses.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2014 11:09:55 GMT"
},
{
"version": "v2",
"created": "Sat, 8 Nov 2014 13:20:49 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Nov 2014 16:19:47 GMT"
},
{
"version": "v4",
"created": "Wed, 28 Jan 2015 17:10:05 GMT"
},
{
"version": "v5",
"created": "Mon, 29 Jun 2015 11:49:38 GMT"
},
{
"version": "v6",
"created": "Mon, 18 Jan 2021 10:59:41 GMT"
},
{
"version": "v7",
"created": "Mon, 14 Jun 2021 11:29:11 GMT"
},
{
"version": "v8",
"created": "Thu, 29 Jul 2021 11:44:43 GMT"
}
] | 1,630,281,600,000 | [
[
"Greer",
"Kieran",
""
]
] |
1403.1169 | J. G. Wolff | J Gerard Wolff | A proof challenge: multiple alignment and information compression | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | These notes pose a "proof challenge": a proof, or disproof, of the
proposition that "For any given body of information, I, expressed as a
one-dimensional sequence of atomic symbols, a multiple alignment concept,
described in the document, provides a means of encoding all the redundancy that
may exist in I." Aspects of the challenge are described.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2014 17:00:19 GMT"
}
] | 1,394,064,000,000 | [
[
"Wolff",
"J Gerard",
""
]
] |
1403.1497 | Manuel Lopes | Manuel Lopes and Luis Montesano | Active Learning for Autonomous Intelligent Agents: Exploration,
Curiosity, and Interaction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this survey we present different approaches that allow an intelligent
agent to explore its environment autonomously to gather information and learn
multiple tasks. Different communities have proposed different solutions that
are, in many cases, similar and/or complementary. These solutions include
active learning, exploration/exploitation, online learning and social learning.
The common aspect of all these approaches is that it is the agent that selects
and decides what information to gather next. Applications for these approaches
already include tutoring systems, autonomous grasp learning, navigation and
mapping, and human-robot interaction. We discuss how these approaches are
related, explaining their similarities and their differences in terms of
problem assumptions and metrics of success. We consider that such an integrated
discussion will improve inter-disciplinary research and applications.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2014 17:12:30 GMT"
}
] | 1,394,150,400,000 | [
[
"Lopes",
"Manuel",
""
],
[
"Montesano",
"Luis",
""
]
] |
1403.1521 | Karl Wiegand | Ian Helmke, Daniel Kreymer, Karl Wiegand | Approximation Models of Combat in StarCraft 2 | 13 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time strategy (RTS) games make heavy use of artificial intelligence
(AI), especially in the design of computerized opponents. Because of the
computational complexity involved in managing all aspects of these games, many
AI opponents are designed to optimize only a few areas of playing style. In
games like StarCraft 2, a very popular and recently released RTS, most AI
strategies revolve around economic and building efficiency: AI opponents try to
gather and spend all resources as quickly and effectively as possible while
ensuring that no units are idle. The aim of this work was to help address the
need for AI combat strategies that are not computationally intensive. Our goal
was to produce a computationally efficient model that is accurate at predicting
the results of complex battles between diverse armies, including which army
will win and how many units will remain. Our results suggest it may be possible
to develop a relatively simple approximation model of combat that can
accurately predict many battles that do not involve micromanagement. Future
designs of AI opponents may be able to incorporate such an approximation model
into their decision and planning systems to provide a challenge that is
strategically balanced across all aspects of play.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2014 18:26:49 GMT"
}
] | 1,394,150,400,000 | [
[
"Helmke",
"Ian",
""
],
[
"Kreymer",
"Daniel",
""
],
[
"Wiegand",
"Karl",
""
]
] |
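The abstract above (1403.1521) does not spell out its model, but the classical starting point for computationally cheap combat approximation is Lanchester's square law, whose closed form predicts both the winner and the number of survivors. A hypothetical sketch in that spirit, not the paper's model:

```python
def lanchester_square(a0, alpha, b0, beta):
    """Lanchester square-law approximation: armies of a0 and b0 units
    with per-unit kill rates alpha and beta. The invariant
    alpha*a(t)**2 - beta*b(t)**2 is constant over the engagement,
    which yields the winner and surviving unit count in closed form."""
    inv = alpha * a0 ** 2 - beta * b0 ** 2
    if inv > 0:
        return 'A', (inv / alpha) ** 0.5
    if inv < 0:
        return 'B', (-inv / beta) ** 0.5
    return 'draw', 0.0

print(lanchester_square(50, 1.0, 40, 1.4))  # quantity vs quality example
```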
1403.2498 | Guoru Ding | Qihui Wu, Guoru Ding (Corresponding author), Yuhua Xu, Shuo Feng,
Zhiyong Du, Jinlong Wang, and Keping Long | Cognitive Internet of Things: A New Paradigm beyond Connection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current research on Internet of Things (IoT) mainly focuses on how to enable
general objects to see, hear, and smell the physical world for themselves, and
make them connected to share the observations. In this paper, we argue that
only connected is not enough, beyond that, general objects should have the
capability to learn, think, and understand both physical and social worlds by
themselves. This practical need impels us to develop a new paradigm, named
Cognitive Internet of Things (CIoT), to empower the current IoT with a `brain'
for high-level intelligence. Specifically, we first present a comprehensive
definition for CIoT, primarily inspired by the effectiveness of human
cognition. Then, we propose an operational framework of CIoT, which mainly
characterizes the interactions among five fundamental cognitive tasks:
perception-action cycle, massive data analytics, semantic derivation and
knowledge discovery, intelligent decision-making, and on-demand service
provisioning. Furthermore, we provide a systematic tutorial on key enabling
techniques involved in the cognitive tasks. In addition, we also discuss the
design of proper performance metrics on evaluating the enabling techniques.
Last but not least, we present the research challenges and open issues ahead.
Building on the present work and potentially fruitful future studies, CIoT has
the capability to bridge the physical world (with objects, resources, etc.) and
the social world (with human demand, social behavior, etc.), and enhance smart
resource allocation, automatic network operation, and intelligent service
provisioning.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2014 08:34:31 GMT"
}
] | 1,394,582,400,000 | [
[
"Wu",
"Qihui",
"",
"Corresponding author"
],
[
"Ding",
"Guoru",
"",
"Corresponding author"
],
[
"Xu",
"Yuhua",
""
],
[
"Feng",
"Shuo",
""
],
[
"Du",
"Zhiyong",
""
],
[
"Wang",
"Jinlong",
""
],
[
"Long",
"Keping",
""
]
] |
1403.2541 | Kieran Greer Dr | Kieran Greer | Turing: Then, Now and Still Key | Published | 'Artificial Intelligence, Evolutionary Computation and
Metaheuristics (AIECM) - Turing 2012', Eds. X-S. Yang, Studies in
Computational Intelligence, 2013, Vol. 427/2013, pp. 43-62, Springer-Verlag
Berlin Heidelberg | 10.1007/978-3-642-29694-9_3 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper looks at Turing's postulations about Artificial Intelligence in
his paper 'Computing Machinery and Intelligence', published in 1950. It notes
how accurate they were and how relevant they still are today. This paper notes
the arguments and mechanisms that he suggested and tries to expand on them
further. The paper, however, is mostly about describing the essential ingredients
for building an intelligent model and the problems related with that. The
discussion includes recent work by the author himself, who adds his own
thoughts on the matter that come from a purely technical investigation into the
problem. These are personal and quite speculative, but provide an interesting
insight into the mechanisms that might be used for building an intelligent
system.
| [
{
"version": "v1",
"created": "Tue, 11 Mar 2014 11:38:23 GMT"
}
] | 1,394,582,400,000 | [
[
"Greer",
"Kieran",
""
]
] |
1403.3084 | Juan Juli\'an Merelo-Guerv\'os Pr. | R.H. Garc\'ia-Ortega, P. Garc\'ia-S\'anchez and J. J. Merelo | Emerging archetypes in massive artificial societies for literary
purposes using genetic algorithms | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | The creation of fictional stories is a very complex task that usually implies
a creative process where the author has to combine characters, conflicts and
plots to create an engaging narrative. This work presents a simulated
environment with hundreds of characters that allows the study of coherent and
interesting literary archetypes (or behaviours), plots and sub-plots. We will
use this environment to perform a study about the number of profiles
(parameters that define the personality of a character) needed to create two
emergent scenes of archetypes: "natality control" and "revenge". A Genetic
Algorithm (GA) will be used to find the fittest number of profiles and
parameter configuration that enables the existence of the desired archetypes
(played by the characters without their explicit knowledge). The results show
that parametrizing this complex system is possible and that these kinds of
archetypes can emerge in the given environment.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2014 18:35:43 GMT"
}
] | 1,394,755,200,000 | [
[
"García-Ortega",
"R. H.",
""
],
[
"García-Sánchez",
"P.",
""
],
[
"Merelo",
"J. J.",
""
]
] |
1403.5142 | Kostyantyn Shchekotykhin | Kostyantyn Shchekotykhin | Interactive Debugging of ASP Programs | Published in Proceedings of the 15th International Workshop on
Non-Monotonic Reasoning (NMR 2014) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Broad application of answer set programming (ASP) for declarative problem
solving requires the development of tools supporting the coding process.
Program debugging is one of the crucial activities within this process.
Recently suggested ASP debugging approaches allow efficient computation of
possible explanations of a fault. However, even for a small program a debugger
might return a large number of possible explanations and selection of the
correct one must be done manually. In this paper we present an interactive
query-based ASP debugging method which extends previous approaches and finds a
preferred explanation by means of observations. The system queries a programmer
whether a set of ground atoms must be true in all (cautiously) or some
(bravely) answer sets of the program. Since some queries can be more
informative than others, we discuss query selection strategies which, given the
user's preferences for an explanation, can find the best query, that is, the
query whose answer reduces the overall number of queries required for the
identification of a preferred explanation.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2014 14:22:58 GMT"
},
{
"version": "v2",
"created": "Thu, 8 May 2014 06:58:31 GMT"
},
{
"version": "v3",
"created": "Wed, 21 May 2014 19:39:14 GMT"
},
{
"version": "v4",
"created": "Tue, 28 Oct 2014 13:16:04 GMT"
}
] | 1,414,540,800,000 | [
[
"Shchekotykhin",
"Kostyantyn",
""
]
] |
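A toy illustration of the query-selection idea in the record above (1403.5142): with uniform preferences, a good query is one whose answer eliminates as many candidate explanations as possible, i.e. one that splits the candidates most evenly. Representing explanations as plain sets of entailed atoms is a simplification, not the paper's brave/cautious semantics:

```python
def best_query(candidates, atoms):
    """candidates: list of sets of ground atoms, one per candidate
    explanation. Pick the atom whose yes/no answer splits the
    candidates most evenly, so that either answer prunes many of them."""
    def imbalance(atom):
        yes = sum(1 for c in candidates if atom in c)
        return abs(2 * yes - len(candidates))   # 0 = perfect halving
    return min(atoms, key=imbalance)

cands = [{'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'c'}]
print(best_query(cands, ['a', 'b', 'c']))       # 'a' splits 2/2
```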
1403.5169 | Xinyang Deng | Yunpeng Li, Ya Li, Jie Liu, Yong Deng | Defuzzify firstly or finally: Does it matter in fuzzy DEMATEL under
uncertain environment? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision-Making Trial and Evaluation Laboratory (DEMATEL) method is widely
used in many real applications. With its desirable property of efficiently
handling uncertain information in decision making, the fuzzy DEMATEL has been
heavily studied. Recently, Dytczak and Ginda suggested defuzzifying the fuzzy
numbers first and then using the classical DEMATEL to obtain the final result.
In this short paper, we show that this is not reasonable in some situations:
the results of defuzzification at the first step do not coincide with the
results of defuzzification at the final step. It seems that the preferable
alternative is to defuzzify at the final step in fuzzy DEMATEL.
| [
{
"version": "v1",
"created": "Thu, 20 Mar 2014 15:28:29 GMT"
}
] | 1,395,360,000,000 | [
[
"Li",
"Yunpeng",
""
],
[
"Li",
"Ya",
""
],
[
"Liu",
"Jie",
""
],
[
"Deng",
"Yong",
""
]
] |
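The non-interchangeability claimed in the record above (1403.5169) is easy to reproduce numerically: already for a single multiplication of triangular fuzzy numbers, centroid defuzzification before and after the operation disagrees, and the DEMATEL matrix computations only amplify this. A minimal sketch with hypothetical numbers, not the DEMATEL computation itself:

```python
def centroid(tfn):
    a, b, c = tfn                    # triangular fuzzy number (a, b, c)
    return (a + b + c) / 3.0

def tfn_mul(x, y):
    # common triangular approximation of fuzzy multiplication (positive TFNs)
    return (x[0] * y[0], x[1] * y[1], x[2] * y[2])

x, y = (1, 2, 6), (1, 3, 4)
first = centroid(x) * centroid(y)    # defuzzify first: 3.0 * 8/3 = 8.0
last = centroid(tfn_mul(x, y))       # defuzzify last: (1+6+24)/3 ~ 10.33
print(first, last)                   # the two orders disagree
```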
1403.5508 | Stefania Costantini | Stefania Costantini | Towards Active Logic Programming | This work was presented at the 2nd International Workshop on
Component-based Software Development in Computational Logic (COCL 1999). In
this paper, the DALI language was first introduced | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the new logic programming language DALI, aimed at
defining agents and agent systems. A main design objective for DALI has been
that of introducing in a declarative fashion all the essential features, while
keeping the language as close as possible to the syntax and semantics of the
plain Horn--clause language. Special atoms and rules have been introduced, for
representing: external events, to which the agent is able to respond
(reactivity); actions (reactivity and proactivity); internal events (previous
conclusions which can trigger further activity); past and present events (to be
aware of what has happened). An extended resolution is provided, so that a DALI
agent is able to answer queries like in the plain Horn--clause language, but is
also able to cope with the different kinds of events, and exhibit a (rational)
reactive and proactive behaviour.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2014 16:22:17 GMT"
}
] | 1,395,619,200,000 | [
[
"Costantini",
"Stefania",
""
]
] |
1403.5701 | Boris Toma\v{s} | Boris Tomas | Cortex simulation system proposal using distributed computer network
environments | 4 pages | IJCSIS Volume 12 No. 3 2014 | null | null | cs.AI | http://creativecommons.org/licenses/publicdomain/ | At the dawn of computer science and the eve of neuroscience, we are
participating in a rebirth of neuroscience due to new technology that allows us
to deeply and precisely explore a whole new world that dwells in our brains.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2014 20:30:55 GMT"
}
] | 1,395,705,600,000 | [
[
"Tomas",
"Boris",
""
]
] |
1403.5753 | Xinyang Deng | Xinyang Deng, Felix T.S. Chan, Rehan Sadiq, Sankaran Mahadevan, Yong
Deng | D-CFPR: D numbers extended consistent fuzzy preference relations | 28 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to express an expert's or a decision maker's preference for alternatives
is an open issue. The consistent fuzzy preference relation (CFPR) has great
advantages in handling this problem because it can be constructed via a smaller
number of pairwise comparisons and satisfies the additive transitivity property.
However, the CFPR is incapable of dealing with the cases involving uncertain
and incomplete information. In this paper, a D numbers extended consistent
fuzzy preference relation (D-CFPR) is proposed to overcome the weakness. The
D-CFPR extends the classical CFPR by using a new model of expressing uncertain
information called D numbers. The D-CFPR inherits the merits of classical CFPR
and can be totally reduced to the classical CFPR. This study can be integrated
into our previous study on the D-AHP (D numbers extended AHP) model to provide a
systematic solution for multi-criteria decision making (MCDM).
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2014 14:09:08 GMT"
}
] | 1,395,705,600,000 | [
[
"Deng",
"Xinyang",
""
],
[
"Chan",
"Felix T. S.",
""
],
[
"Sadiq",
"Rehan",
""
],
[
"Mahadevan",
"Sankaran",
""
],
[
"Deng",
"Yong",
""
]
] |
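For reference alongside the record above (1403.5753): the classical CFPR that D-CFPR extends is built from only n-1 adjacent pairwise comparisons using additive transitivity, p_ik = p_ij + p_jk - 0.5. A minimal sketch; the rescaling step used when values leave [0,1] is omitted:

```python
def cfpr_from_chain(chain):
    """Build a full consistent fuzzy preference relation over n
    alternatives from the n-1 adjacent comparisons chain[i] = p(i, i+1),
    using additive transitivity p_ik = p_ij + p_jk - 0.5."""
    n = len(chain) + 1
    P = [[0.5] * n for _ in range(n)]
    for i in range(n - 1):
        P[i][i + 1] = chain[i]
        P[i + 1][i] = 1.0 - chain[i]
    for gap in range(2, n):                  # fill longer chains
        for i in range(n - gap):
            k = i + gap
            P[i][k] = P[i][k - 1] + P[k - 1][k] - 0.5
            P[k][i] = 1.0 - P[i][k]
    return P

print(cfpr_from_chain([0.7, 0.6]))           # implies p(0,2) = 0.8
```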
1403.6036 | C R Ramakrishnan | Arun Nampally, C. R. Ramakrishnan | Adaptive MCMC-Based Inference in Probabilistic Logic Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic Logic Programming (PLP) languages enable programmers to specify
systems that combine logical models with statistical knowledge. The inference
problem, to determine the probability of query answers in PLP, is intractable
in general, thereby motivating the need for approximate techniques. In this
paper, we present a technique for approximate inference of conditional
probabilities for PLP queries. It is an Adaptive Markov Chain Monte Carlo
(MCMC) technique, where the distribution from which samples are drawn is
modified as the Markov Chain is explored. In particular, the distribution is
progressively modified to increase the likelihood that a generated sample is
consistent with evidence. In our context, each sample is uniquely characterized
by the outcomes of a set of random variables. Inspired by reinforcement
learning, our technique propagates rewards to random variable/outcome pairs
used in a sample based on whether the sample was consistent or not. The
cumulative reward of each outcome is used to derive a new "adapted
distribution" for each random variable. For a sequence of samples, the
distributions are progressively adapted after each sample. For a query with
"Markovian evaluation structure", we show that the adapted distribution of
samples converges to the query's conditional probability distribution. For
Markovian queries, we present a modified adaptation process that can be used in
adaptive MCMC as well as adaptive independent sampling. We empirically evaluate
the effectiveness of the adaptive sampling methods for queries with and without
Markovian evaluation structure.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2014 16:51:06 GMT"
}
] | 1,395,705,600,000 | [
[
"Nampally",
"Arun",
""
],
[
"Ramakrishnan",
"C. R.",
""
]
] |
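The adaptation idea in the record above (1403.6036) can be sketched in isolation: sample each random variable from weights proportional to cumulative rewards, and reinforce the outcomes of evidence-consistent samples. The paper's MCMC machinery, Markovian evaluation structure, and convergence analysis are not reproduced; `consistent` is a hypothetical evidence check:

```python
import random

def adaptive_sampling(variables, outcomes, consistent, sweeps=10000, eta=1.0):
    """variables: list of names; outcomes[v]: list of possible outcomes;
    consistent(sample) -> bool: does the sample agree with the evidence?
    Returns the evidence-consistent samples that were drawn."""
    reward = {v: [1.0] * len(outcomes[v]) for v in variables}
    kept = []
    for _ in range(sweeps):
        sample = {v: random.choices(outcomes[v], weights=reward[v])[0]
                  for v in variables}
        if consistent(sample):
            kept.append(sample)
            for v in variables:          # reinforce the outcomes used
                reward[v][outcomes[v].index(sample[v])] += eta
    return kept
```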
1403.7292 | Arti Gupta Shambhuprasad | Arti Gupta, Prof. N.T Deotale | A Mining Method to Create Knowledge Map by Analysing the Data Resource | 6 pages,5 figures, Published with International Journal of
Engineering Trends and Technology (IJETT)" | International Journal of Engineering Trends and
Technology(IJETT),V9(9),430-435 March 2014 | 10.14445/22315381/IJETT-V9P282 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fundamental step in measuring the robustness of a system is the synthesis
of the so-called Process Map. This is generally based on the user's raw data
material. Process Maps are of fundamental importance to the understanding of
the nature of a system in that they indicate which variables are causally
related and which are particularly important. This paper presents the system
map, or business structure map, to understand business criteria by studying the
various aspects of a company. The business structure map, knowledge map, or
Process Map is used to increase the growth of the company by giving some useful
measures according to the business criteria. This paper also deals with
different company strategies to reduce risk factors. The Process Map is helpful
for building such knowledge successfully. Making decisions from such a map in a
highly complex situation requires more knowledge and resources.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2014 07:35:56 GMT"
}
] | 1,396,224,000,000 | [
[
"Gupta",
"Arti",
""
],
[
"Deotale",
"Prof. N. T",
""
]
] |
1403.7373 | Radek Pel\'anek | Radek Pel\'anek | Difficulty Rating of Sudoku Puzzles: An Overview and Evaluation | 24 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How can we predict the difficulty of a Sudoku puzzle? We give an overview of
difficulty rating metrics and evaluate them on an extensive dataset of human
problem solving (more than 1700 Sudoku puzzles, hundreds of solvers). The best
results are obtained using a computational model of human solving activity.
Using the model we show that there are two sources of the problem difficulty:
complexity of individual steps (logic operations) and structure of dependency
among steps. We also describe metrics based on analysis of solutions under
relaxed constraints -- a novel approach inspired by phase transition phenomenon
in the graph coloring problem. In our discussion we focus not just on the
performance of individual metrics on the Sudoku puzzle, but also on their
generalizability and applicability to other problems.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2014 13:43:50 GMT"
}
] | 1,396,224,000,000 | [
[
"Pelánek",
"Radek",
""
]
] |
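In the spirit of the record above (1403.7373), which traces difficulty to the complexity of individual logic steps and their dependency structure, here is a toy difficulty proxy: count the propagation rounds a naked-singles solver needs, and note whether it stalls (harder techniques or search required). This is far simpler than the paper's computational model of human solving:

```python
def singles_rounds(grid):
    """Toy difficulty proxy: repeatedly fill in naked singles (cells with
    exactly one candidate) and count the rounds needed; a puzzle that
    stalls before completion needs harder techniques or search.
    grid: 9x9 list of lists, 0 marks an empty cell (mutated in place)."""
    def candidates(r, c):
        used = set(grid[r]) | {grid[i][c] for i in range(9)}
        br, bc = 3 * (r // 3), 3 * (c // 3)
        used |= {grid[i][j] for i in range(br, br + 3)
                 for j in range(bc, bc + 3)}
        return set(range(1, 10)) - used
    rounds = 0
    while True:
        forced = [(r, c, cand.pop())
                  for r in range(9) for c in range(9)
                  if grid[r][c] == 0 and len(cand := candidates(r, c)) == 1]
        if not forced:
            break
        for r, c, v in forced:
            grid[r][c] = v
        rounds += 1
    solved = all(all(row) for row in grid)
    return rounds, solved
```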
1403.7426 | Ilche Georgievski | Ilche Georgievski and Marco Aiello | An Overview of Hierarchical Task Network Planning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchies are the most common structure used to understand the world
better. In galaxies, for instance, multiple-star systems are organised in a
hierarchical system. Then, governmental and company organisations are
structured using a hierarchy, while the Internet, which is used on a daily
basis, has a space of domain names arranged hierarchically. Since Artificial
Intelligence (AI) planning portrays information about the world and reasons to
solve some of the world's problems, Hierarchical Task Network (HTN) planning
was introduced almost 40 years ago to represent and deal with hierarchies. Its
requirement for rich domain knowledge to characterise the world enables HTN
planning to be very useful, but also to perform well. However, the history of
almost 40 years obfuscates the current understanding of HTN planning in terms
of accomplishments, planning models, similarities and differences among
hierarchical planners, and its current and objective image. On top of these
issues, attention attracts the ability of hierarchical planning to truly cope
with the requirements of applications from the real world. We propose a
framework-based approach to remedy this situation. First, we provide a basis
for defining different formal models of hierarchical planning, and define two
models that comprise a large portion of HTN planners. Second, we provide a set
of concepts that helps to interpret HTN planners from the aspect of their
search space. Then, we analyse and compare the planners based on a variety of
properties organised in five segments, namely domain authoring, expressiveness,
competence, performance and applicability. Furthermore, we select Web service
composition as a real-world and current application, and classify and compare
the approaches that employ HTN planning to solve the problem of service
composition. Finally, we conclude with our findings and present directions for
future work.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2014 16:01:51 GMT"
}
] | 1,396,224,000,000 | [
[
"Georgievski",
"Ilche",
""
],
[
"Aiello",
"Marco",
""
]
] |
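To make the decomposition idea in the overview above (1403.7426) concrete, a minimal totally-ordered HTN planner in the SHOP style, one of the model families such surveys cover; states are sets of facts and operator effects are add-only, which is a simplification:

```python
def htn_plan(state, tasks, methods, operators):
    """tasks: ordered list of task names. operators[t] = (preconds, adds)
    for primitive tasks; methods[t] = list of alternative subtask lists
    for compound tasks. Returns a primitive plan or None."""
    if not tasks:
        return []
    head, rest = tasks[0], tasks[1:]
    if head in operators:                      # primitive task
        pre, add = operators[head]
        if pre <= state:                       # applicable in this state?
            tail = htn_plan(state | add, rest, methods, operators)
            if tail is not None:
                return [head] + tail
        return None
    for subtasks in methods.get(head, []):     # compound task: decompose
        plan = htn_plan(state, list(subtasks) + rest, methods, operators)
        if plan is not None:
            return plan
    return None

ops = {'walk': (set(), {'at-park'}), 'play': ({'at-park'}, {'happy'})}
methods = {'afternoon': [['walk', 'play']]}
print(htn_plan(set(), ['afternoon'], methods, ops))  # ['walk', 'play']
```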
1403.7465 | Nisheeth Joshi | Iti Mathur, Nisheeth Joshi, Hemant Darbari and Ajai Kumar | Shiva: A Framework for Graph Based Ontology Matching | null | International Journal of Computer Applications 89(11):30-34, March
2014 | 10.5120/15678-4435 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For a long time, corporations have been looking for knowledge sources which can
provide a structured description of data and can focus on meaning and shared
understanding: structures which can facilitate open-world assumptions and can
be flexible enough to incorporate and recognize more than one name for an
entity, and whose major purpose is to facilitate human communication and
interoperability. Clearly, databases fail to provide these features, and
ontologies have emerged as an alternative choice; however, corporations working
on the same domain tend to build different ontologies. The problem occurs when
they want to share their data/knowledge, so we need tools to merge ontologies
into one. This task is termed ontology matching. This is an emerging area, and
we still have a long way to go towards an ideal matcher that can produce good
results. In this paper we present a framework for matching ontologies using
graphs.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2014 18:00:13 GMT"
}
] | 1,396,224,000,000 | [
[
"Mathur",
"Iti",
""
],
[
"Joshi",
"Nisheeth",
""
],
[
"Darbari",
"Hemant",
""
],
[
"Kumar",
"Ajai",
""
]
] |
1404.0640 | Akin Osman Kazakci | Akin Osman Kazakci (CGS) | Conceptive Artificial Intelligence: Insights from design theory | null | International Design Conference DESIGN2014, Croatia (2014) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current paper offers a perspective on what we term conceptive
intelligence - the capacity of an agent to continuously think of new object
definitions (tasks, problems, physical systems, etc.) and to look for methods
to realize them. The framework, called a Brouwer machine, is inspired by
previous research in design theory and modeling, with its roots in the
constructivist mathematics of intuitionism. The dual constructivist perspective
we describe offers the possibility to create novelty both in terms of the types
of objects and the methods for constructing objects. More generally, the
theoretical work on which Brouwer machines are based is called imaginative
constructivism. Based on the framework and the theory, we discuss many
paradigms and techniques omnipresent in AI research and their merits and
shortcomings for modeling aspects of design, as described by imaginative
constructivism. To demonstrate and explain the type of creative process
expressed by the notion of a Brouwer machine, we compare this concept with a
system using genetic algorithms for scientific law discovery.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2014 18:06:40 GMT"
}
] | 1,396,483,200,000 | [
[
"Kazakci",
"Akin Osman",
"",
"CGS"
]
] |
1404.1511 | Aske Plaat | Aske Plaat | MTD(f), A Minimax Algorithm Faster Than NegaScout | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | MTD(f) is a new minimax search algorithm, simpler and more efficient than
previous algorithms. In tests with a number of tournament game playing programs
for chess, checkers and Othello it performed better, on average, than
NegaScout/PVS (the AlphaBeta variant used in practically all good chess,
checkers, and Othello programs). One of the strongest chess programs of the
moment, MIT's parallel chess program Cilkchess uses MTD(f) as its search
algorithm, replacing NegaScout, which was used in StarSocrates, the previous
version of the program.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2014 19:51:05 GMT"
}
] | 1,396,915,200,000 | [
[
"Plaat",
"Aske",
""
]
] |
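For reference, the MTD(f) driver itself is only a few lines; the sketch below is a Python transliteration of the usual formulation, in which repeated null-window calls to an Alpha-Beta that stores bounds in a transposition table converge on the minimax value. The `alpha_beta_mem` callback is assumed here, not defined:

```python
INF = float('inf')

def mtdf(root, guess, depth, alpha_beta_mem):
    """MTD(f): zoom in on the minimax value with null-window searches.
    alpha_beta_mem(node, alpha, beta, depth) must use a transposition
    table so that re-searches are cheap."""
    g, lower, upper = guess, -INF, +INF
    while lower < upper:
        beta = g + 1 if g == lower else g
        g = alpha_beta_mem(root, beta - 1, beta, depth)
        if g < beta:
            upper = g          # search failed low: g is an upper bound
        else:
            lower = g          # search failed high: g is a lower bound
    return g
```

A good first `guess` (e.g. the value from the previous iterative-deepening pass) keeps the number of null-window passes small, which is where the speedup over wide-window searchers comes from.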
1404.1515 | Aske Plaat | Aske Plaat, Jonathan Schaeffer, Wim Pijls, Arie de Bruin | A New Paradigm for Minimax Search | Novag Award 1994-1995 Best Computer Chess publication | null | null | Univ Alberta TR 94-18 | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This paper introduces a new paradigm for minimax game-tree search
algorithms. MT is a memory-enhanced version of Pearl's Test procedure. By changing
the way MT is called, a number of best-first game-tree search algorithms can be
simply and elegantly constructed (including SSS*). Most of the assessments of
minimax search algorithms have been based on simulations. However, these
simulations generally do not address two of the key ingredients of high
performance game-playing programs: iterative deepening and memory usage. This
paper presents experimental data from three game-playing programs (checkers,
Othello and chess), covering the range from low to high branching factor. The
improved move ordering due to iterative deepening and memory usage results in
significantly different results from those portrayed in the literature. Whereas
some simulations show Alpha-Beta expanding almost 100% more leaf nodes than
other algorithms [12], our results showed variations of less than 20%. One new
instance of our framework (MTD-f) outperforms our best alpha-beta searcher
(aspiration NegaScout) on leaf nodes, total nodes and execution time. To our
knowledge, these are the first reported results that compare both depth-first
and best-first algorithms given the same amount of memory.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2014 20:05:31 GMT"
}
] | 1,396,915,200,000 | [
[
"Plaat",
"Aske",
""
],
[
"Schaeffer",
"Jonathan",
""
],
[
"Pijls",
"Wim",
""
],
[
"de Bruin",
"Arie",
""
]
] |
1404.1517 | Aske Plaat | Aske Plaat, Jonathan Schaeffer, Wim Pijls, Arie de Bruin | SSS* = Alpha-Beta + TT | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In 1979 Stockman introduced the SSS* minimax search algorithm that
dominates Alpha-Beta in the number of leaf nodes expanded. Further investigation of
the algorithm showed that it had three serious drawbacks, which prevented its
use by practitioners: it is difficult to understand, it has large memory
requirements, and it is slow. This paper presents an alternate formulation of
SSS*, in which it is implemented as a series of Alpha-Beta calls that use a
transposition table (AB-SSS*). The reformulation solves all three perceived
drawbacks of SSS*, making it a practical algorithm. Further, because the search
is now based on Alpha-Beta, the extensive research on minimax search
enhancements can be easily integrated into AB-SSS*. To test AB-SSS* in
practice, it has been implemented in three state-of-the-art programs: for
checkers, Othello and chess. AB-SSS* is comparable in performance to Alpha-Beta
on leaf node count in all three games, making it a viable alternative to
Alpha-Beta in practice. Whereas SSS* has usually been regarded as being
entirely different from Alpha-Beta, it turns out to be just an Alpha-Beta
enhancement, like null-window searching. This runs counter to published
simulation results. Our research leads to the surprising result that iterative
deepening versions of Alpha-Beta can expand fewer leaf nodes than iterative
deepening versions of SSS* due to dynamic move re-ordering.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2014 20:09:58 GMT"
}
] | 1,396,915,200,000 | [
[
"Plaat",
"Aske",
""
],
[
"Schaeffer",
"Jonathan",
""
],
[
"Pijls",
"Wim",
""
],
[
"de Bruin",
"Arie",
""
]
] |
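The building block behind AB-SSS* in the record above is Alpha-Beta extended with a transposition table that stores lower and upper bounds per node. A negamax-style sketch under that assumption (the paper works with minimax trees, and enhancements such as move ordering are omitted); `children` and `evaluate` are hypothetical callbacks:

```python
INF = float('inf')

def ab_with_memory(node, alpha, beta, depth, tt, children, evaluate):
    """Alpha-Beta with memory: bounds found earlier are kept in the
    transposition table tt and used to narrow (or cut) later searches.
    Repeated null-window calls on top of this routine recover the
    SSS*-like behaviour of AB-SSS*."""
    lo, hi = tt.get((node, depth), (-INF, +INF))
    if lo >= beta:
        return lo
    if hi <= alpha:
        return hi
    alpha, beta = max(alpha, lo), min(beta, hi)
    kids = children(node)
    if depth == 0 or not kids:
        g = evaluate(node)               # leaf value, side to move
    else:
        g, a = -INF, alpha
        for child in kids:
            g = max(g, -ab_with_memory(child, -beta, -a, depth - 1,
                                       tt, children, evaluate))
            a = max(a, g)
            if g >= beta:                # cutoff
                break
    if g <= alpha:
        hi = g                           # fail low: g is an upper bound
    elif g >= beta:
        lo = g                           # fail high: g is a lower bound
    else:
        lo = hi = g                      # exact value inside the window
    tt[(node, depth)] = (lo, hi)
    return g
```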
1404.1518 | Aske Plaat | Aske Plaat, Jonathan Schaeffer, Wim Pijls, Arie de Bruin | Nearly Optimal Minimax Tree Search? | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Knuth and Moore presented a theoretical lower bound on the number of leaves
that any fixed-depth minimax tree-search algorithm traversing a uniform tree
must explore, the so-called minimal tree. Since real-life minimax trees are not
uniform, the exact size of this tree is not known for most applications.
Further, most games have transpositions, implying that there exists a minimal
graph which is smaller than the minimal tree. For three games (chess, Othello
and checkers) we compute the size of the minimal tree and the minimal graph.
Empirical evidence shows that in all three games, enhanced Alpha-Beta search is
capable of building a tree that is close in size to that of the minimal graph.
Hence, it appears game-playing programs build nearly optimal search trees.
However, the conventional definition of the minimal graph is wrong. There are
ways in which the size of the minimal graph can be reduced: by maximizing the
number of transpositions in the search, and generating cutoffs using branches
that lead to smaller search trees. The conventional definition of the minimal
graph is just a left-most approximation. Calculating the size of the real
minimal graph is too computationally intensive. However, upper bound
approximations show it to be significantly smaller than the left-most minimal
graph. Hence, it appears that game-playing programs are not searching as
efficiently as is widely believed. Understanding the left-most and real minimal
search graphs leads to some new ideas for enhancing Alpha-Beta search. One of
them, enhanced transposition cutoffs, is shown to significantly reduce search
tree size.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2014 20:13:58 GMT"
}
] | 1,396,915,200,000 | [
[
"Plaat",
"Aske",
""
],
[
"Schaeffer",
"Jonathan",
""
],
[
"Pijls",
"Wim",
""
],
[
"de Bruin",
"Arie",
""
]
] |
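The Knuth-Moore lower bound referenced in the record above has a simple closed form for a uniform tree of branching factor w and depth d: w^ceil(d/2) + w^floor(d/2) - 1 leaves. A one-liner to compute it; the uniform-tree assumption is exactly what the paper relaxes:

```python
from math import ceil, floor

def minimal_tree_leaves(w, d):
    """Knuth-Moore bound: leaves any fixed-depth Alpha-Beta-style search
    of a uniform tree (branching factor w, depth d) must visit."""
    return w ** ceil(d / 2) + w ** floor(d / 2) - 1

print(minimal_tree_leaves(40, 8))   # 5,119,999 for a chess-like tree
```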
1404.1718 | Gabriel Leuenberger | Gabriel Leuenberger | Applications of Algorithmic Probability to the Philosophy of Mind | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents formulae that can solve various seemingly hopeless
philosophical conundrums. We discuss the simulation argument, teleportation,
mind-uploading, the rationality of utilitarianism, and the ethics of exploiting
artificial general intelligence. Our approach arises from combining the
essential ideas of formalisms such as algorithmic probability, the universal
intelligence measure, space-time-embedded intelligence, and Hutter's observer
localization. We argue that such universal models can yield the ultimate
solutions, but a novel research direction would be required in order to find
computationally efficient approximations thereof.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2014 10:02:47 GMT"
},
{
"version": "v2",
"created": "Thu, 8 May 2014 05:31:18 GMT"
},
{
"version": "v3",
"created": "Tue, 20 May 2014 19:20:38 GMT"
},
{
"version": "v4",
"created": "Sun, 27 Jul 2014 01:49:51 GMT"
},
{
"version": "v5",
"created": "Fri, 31 Oct 2014 04:37:52 GMT"
},
{
"version": "v6",
"created": "Sun, 27 Mar 2016 13:07:44 GMT"
},
{
"version": "v7",
"created": "Mon, 18 Apr 2016 16:55:19 GMT"
},
{
"version": "v8",
"created": "Thu, 5 Jan 2017 16:51:55 GMT"
}
] | 1,483,660,800,000 | [
[
"Leuenberger",
"Gabriel",
""
]
] |
1404.1812 | Anugrah Kumar | Anugrah Kumar | Determining the Consistency factor of Autopilot using Rough Set Theory | IEEE International Conference on Networking, Sensing and Control 2014 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autopilot is a system designed to guide a vehicle without aid. Due to
the increase in flight hours and the complexity of modern-day flight, it has
become imperative to equip aircraft with an autopilot. Thus the reliability and
consistency of an autopilot system play a crucial role in a flight. But the
increased complexity and the demand for better accuracy have made evaluating
the autopilot for consistency a difficult process, involving a vast amount of
imprecise data. Rough sets can be a potent tool for such applications
containing vague data. This paper proposes an approach to consistency factor
determination using Rough Set Theory. The seventeen basic factors that are
crucial in determining the consistency of an autopilot system are grouped into
five payloads based on their functionality. The consistency factor is evaluated
through these payloads using Rough Set Theory. The consistency factor
determines the consistency and reliability of an autopilot system and the
conditions under which manual override becomes imperative. Using Rough Set
Theory, the most and the least influential factors for the autopilot system are
also determined.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2014 15:08:09 GMT"
}
] | 1,396,915,200,000 | [
[
"Kumar",
"Anugrah",
""
]
] |
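A minimal rough-set sketch relevant to the record above (1404.1812): the lower and upper approximations of a decision class under the indiscernibility relation induced by a set of condition attributes, and the resulting accuracy measure. How the paper maps its payloads onto this machinery is not reproduced; the `signature` function is a hypothetical stand-in:

```python
def rough_approximations(universe, target, signature):
    """signature(x): tuple of condition-attribute values of object x.
    Objects with equal signatures are indiscernible; the lower/upper
    approximations bound the target set from inside and outside, and
    their size ratio is the usual rough-set accuracy."""
    classes = {}
    for x in universe:
        classes.setdefault(signature(x), set()).add(x)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= target:       # class entirely inside the target
            lower |= cls
        if cls & target:        # class overlaps the target
            upper |= cls
    accuracy = len(lower) / len(upper) if upper else 1.0
    return lower, upper, accuracy
```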
1404.1884 | Guoming Tang | Guoming Tang, Kui Wu, Jingsheng Lei, and Jiuyang Tang | Plug and Play! A Simple, Universal Model for Energy Disaggregation | 12 pages, 5 figures, and 4 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Energy disaggregation is to discover the energy consumption of individual
appliances from their aggregated energy values. To solve the problem, most
existing approaches rely on either appliances' signatures or their state
transition patterns, both hard to obtain in practice. Aiming at developing a
simple, universal model that works without depending on sophisticated machine
learning techniques or auxiliary equipment, we make use of easily accessible
knowledge of appliances and the sparsity of the switching events to design a
Sparse Switching Event Recovering (SSER) method. By minimizing the total
variation (TV) of the (sparse) event matrix, SSER can effectively recover the
individual energy consumption values from the aggregated ones. To speed up the
process, a Parallel Local Optimization Algorithm (PLOA) is proposed to solve
the problem in active epochs of appliance activities in parallel. Using
real-world trace data, we compare the performance of our method with that of
the state-of-the-art solutions, including Least Square Estimation (LSE) and
iterative Hidden Markov Model (HMM). The results show that our approach has an
overall higher detection accuracy and a smaller overhead.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2014 19:02:30 GMT"
}
] | 1,396,915,200,000 | [
[
"Tang",
"Guoming",
""
],
[
"Wu",
"Kui",
""
],
[
"Lei",
"Jingsheng",
""
],
[
"Tang",
"Jiuyang",
""
]
] |
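A toy rendering of the sparse-switching idea in the record above (1404.1884), using a convex relaxation: appliance on/off states are recovered by fitting the aggregate signal while penalising total variation, i.e. switching events. The cvxpy formulation below is an illustration only; the paper's SSER formulation and PLOA solver differ:

```python
import numpy as np
import cvxpy as cp

def tv_disaggregate(aggregate, ratings, lam=1.0):
    """aggregate: length-T aggregated power signal; ratings: rated power
    of each of n appliances. Recovers a relaxed (T, n) on/off state
    matrix by penalising switching events via total variation."""
    T, n = len(aggregate), len(ratings)
    S = cp.Variable((T, n))
    fit = cp.sum_squares(S @ np.asarray(ratings) - aggregate)
    switching = sum(cp.tv(S[:, j]) for j in range(n))
    cp.Problem(cp.Minimize(fit + lam * switching),
               [S >= 0, S <= 1]).solve()   # 0/1 states relaxed to [0, 1]
    return S.value
```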
1404.2116 | Tshilidzi Marwala | Tshilidzi Marwala | Rational Counterfactuals | To appear in Artificial Intelligence for Rational Decision Making
(Springer-Verlag) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces the concept of rational counterfactuals, which is an idea
of identifying a counterfactual from the factual (whether perceived or real)
that maximizes the attainment of the desired consequent. In counterfactual
thinking, if we have a factual statement like "Saddam Hussein invaded Kuwait
and consequently George Bush declared war on Iraq," then its counterfactual is: If
Saddam Hussein did not invade Kuwait then George Bush would not have declared
war on Iraq. The theory of rational counterfactuals is applied to identify the
antecedent that gives the desired consequent necessary for rational decision
making. The rational counterfactual theory is applied to identify the values of
the variables Allies, Contingency, Distance, Major Power, Capability, Democracy,
as well as Economic Interdependency, that give the desired consequent Peace.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2014 13:15:06 GMT"
}
] | 1,397,001,600,000 | [
[
"Marwala",
"Tshilidzi",
""
]
] |
1404.2162 | Georg Kaes | Georg Kaes, J\"urgen Manger, Stefanie Rinderle-Ma, Ralph Vigne | The NNN Formalization: Review and Development of Guideline Specification
in the Care Domain | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | Due to an ageing society, it can be expected that less nursing personnel will
be responsible for an increasing number of patients in the future. One way to
address this challenge is to provide system-based support for nursing personnel
in creating, executing, and adapting patient care processes. In care practice,
these processes are following the general care process definition and
individually specified according to patient-specific data as well as diagnoses
and guidelines from the NANDA, NIC, and NOC (NNN) standards. In addition,
adaptations to running patient processes frequently become necessary and are to
be conducted by nursing personnel using NNN knowledge. In order to provide
semi-automatic support for design and adaption of care processes, a
formalization of NNN knowledge is indispensable. This technical report presents
the NNN formalization that is developed targeting at goals such as
completeness, flexibility, and later exploitation for creating and adapting
patient care processes. The formalization also takes into consideration an
extensive evaluation of existing formalization standards for clinical
guidelines. The NNN formalization as well as its usage are evaluated based on
the case study FATIGUE.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2014 14:50:53 GMT"
}
] | 1,397,001,600,000 | [
[
"Kaes",
"Georg",
""
],
[
"Manger",
"Jürgen",
""
],
[
"Rinderle-Ma",
"Stefanie",
""
],
[
"Vigne",
"Ralph",
""
]
] |