id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1107.0044 | P. Beame | P. Beame, H. Kautz, A. Sabharwal | Towards Understanding and Harnessing the Potential of Clause Learning | null | Journal Of Artificial Intelligence Research, Volume 22, pages
319-351, 2004 | 10.1613/jair.1410 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient implementations of DPLL with the addition of clause learning are
the fastest complete Boolean satisfiability solvers and can handle many
significant real-world problems, such as verification, planning and design.
Despite its importance, little is known of the ultimate strengths and
limitations of the technique. This paper presents the first precise
characterization of clause learning as a proof system (CL), and begins the task
of understanding its power by relating it to the well-studied resolution proof
system. In particular, we show that with a new learning scheme, CL can provide
exponentially shorter proofs than many proper refinements of general resolution
(RES) satisfying a natural property. These include regular and Davis-Putnam
resolution, which are already known to be much stronger than ordinary DPLL. We
also show that a slight variant of CL with unlimited restarts is as powerful as
RES itself. Translating these analytical results to practice, however, presents
a challenge because of the nondeterministic nature of clause learning
algorithms. We propose a novel way of exploiting the underlying problem
structure, in the form of a high level problem description such as a graph or
PDDL specification, to guide clause learning algorithms toward faster
solutions. We show that this leads to exponential speed-ups on grid and
randomized pebbling problems, as well as substantial improvements on certain
ordering formulas.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:39:28 GMT"
}
] | 1,309,737,600,000 | [
[
"Beame",
"P.",
""
],
[
"Kautz",
"H.",
""
],
[
"Sabharwal",
"A.",
""
]
] |
1107.0045 | C. Cayrol | C. Cayrol, M. C. Lagasquie-Schiex | Graduality in Argumentation | null | Journal Of Artificial Intelligence Research, Volume 23, pages
245-297, 2005 | 10.1613/jair.1411 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argumentation is based on the exchange and valuation of interacting
arguments, followed by the selection of the most acceptable of them (for
example, in order to take a decision, to make a choice). Starting from the
framework proposed by Dung in 1995, our purpose is to introduce 'graduality' in
the selection of the best arguments, i.e., to be able to partition the set of
the arguments in more than the two usual subsets of 'selected' and
'non-selected' arguments in order to represent different levels of selection.
Our basic idea is that an argument is all the more acceptable if it can be
preferred to its attackers. First, we discuss general principles underlying a
'gradual' valuation of arguments based on their interactions. Following these
principles, we define several valuation models for an abstract argumentation
system. Then, we introduce 'graduality' in the concept of acceptability of
arguments. We propose new acceptability classes and a refinement of existing
classes taking advantage of an available 'gradual' valuation.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:39:39 GMT"
}
] | 1,309,737,600,000 | [
[
"Cayrol",
"C.",
""
],
[
"Lagasquie-Schiex",
"M. C.",
""
]
] |
1107.0046 | P. Derbeko | P. Derbeko, R. El-Yaniv, R. Meir | Explicit Learning Curves for Transduction and Application to Clustering
and Compression Algorithms | null | Journal Of Artificial Intelligence Research, Volume 22, pages
117-142, 2004 | 10.1613/jair.1417 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inductive learning is based on inferring a general rule from a finite data
set and using it to label new data. In transduction one attempts to solve the
problem of using a labeled training set to label a set of unlabeled points,
which are given to the learner prior to learning. Although transduction seems
at the outset to be an easier task than induction, there have not been many
provably useful algorithms for transduction. Moreover, the precise relation
between induction and transduction has not yet been determined. The main
theoretical developments related to transduction were presented by Vapnik more
than twenty years ago. One of Vapnik's basic results is a rather tight error
bound for transductive classification based on an exact computation of the
hypergeometric tail. While tight, this bound is given implicitly via a
computational routine. Our first contribution is a somewhat looser but explicit
characterization of a slightly extended PAC-Bayesian version of Vapnik's
transductive bound. This characterization is obtained using concentration
inequalities for the tail of sums of random variables obtained by sampling
without replacement. We then derive error bounds for compression schemes such
as (transductive) support vector machines and for transduction algorithms based
on clustering. The main observation used for deriving these new error bounds
and algorithms is that the unlabeled test points, which in the transductive
setting are known in advance, can be used in order to construct useful data
dependent prior distributions over the hypothesis space.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:39:52 GMT"
}
] | 1,309,737,600,000 | [
[
"Derbeko",
"P.",
""
],
[
"El-Yaniv",
"R.",
""
],
[
"Meir",
"R.",
""
]
] |
1107.0047 | C. V. Goldman | C. V. Goldman, S. Zilberstein | Decentralized Control of Cooperative Systems: Categorization and
Complexity Analysis | null | Journal Of Artificial Intelligence Research, Volume 22, pages
143-174, 2004 | 10.1613/jair.1427 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decentralized control of cooperative systems captures the operation of a
group of decision makers that share a single global objective. The difficulty
in solving optimally such problems arises when the agents lack full
observability of the global state of the system when they operate. The general
problem has been shown to be NEXP-complete. In this paper, we identify classes
of decentralized control problems whose complexity ranges between NEXP and P.
In particular, we study problems characterized by independent transitions,
independent observations, and goal-oriented objective functions. Two algorithms
are shown to solve optimally useful classes of goal-oriented decentralized
processes in polynomial time. This paper also studies information sharing among
the decision-makers, which can improve their performance. We distinguish
between three ways in which agents can exchange information: indirect
communication, direct communication and sharing state features that are not
controlled by the agents. Our analysis shows that for every class of problems
we consider, introducing direct or indirect communication does not change the
worst-case complexity. The results provide a better understanding of the
complexity of decentralized control problems that arise in practice and
facilitate the development of planning algorithms for these problems.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:40:04 GMT"
}
] | 1,309,737,600,000 | [
[
"Goldman",
"C. V.",
""
],
[
"Zilberstein",
"S.",
""
]
] |
1107.0048 | E. Celaya | E. Celaya, J. M. Porta | Reinforcement Learning for Agents with Many Sensors and Actuators Acting
in Categorizable Environments | null | Journal Of Artificial Intelligence Research, Volume 23, pages
79-122, 2005 | 10.1613/jair.1437 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we confront the problem of applying reinforcement learning to
agents that perceive the environment through many sensors and that can perform
parallel actions using many actuators as is the case in complex autonomous
robots. We argue that reinforcement learning can only be successfully applied
to this case if strong assumptions are made on the characteristics of the
environment in which the learning is performed, so that the relevant sensor
readings and motor commands can be readily identified. The introduction of such
assumptions leads to strongly-biased learning systems that can eventually lose
the generality of traditional reinforcement-learning algorithms. In this line,
we observe that, in realistic situations, the reward received by the robot
depends only on a reduced subset of all the executed actions and that only a
reduced subset of the sensor inputs (possibly different in each situation and
for each action) are relevant to predict the reward. We formalize this property
in the so called 'categorizability assumption' and we present an algorithm that
takes advantage of the categorizability of the environment, allowing a decrease
in the learning time with respect to existing reinforcement-learning
algorithms. Results of the application of the algorithm to a couple of
simulated realistic-robotic problems (landmark-based navigation and the
six-legged robot gait generation) are reported to validate our approach and to
compare it to existing flat and generalization-based reinforcement-learning
approaches.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:40:15 GMT"
}
] | 1,309,737,600,000 | [
[
"Celaya",
"E.",
""
],
[
"Porta",
"J. M.",
""
]
] |
1107.0050 | A. Felner | A. Felner, S. Hanan, R. E. Korf | Additive Pattern Database Heuristics | null | Journal Of Artificial Intelligence Research, Volume 22, pages
279-318, 2004 | 10.1613/jair.1480 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore a method for computing admissible heuristic evaluation functions
for search problems. It utilizes pattern databases, which are precomputed
tables of the exact cost of solving various subproblems of an existing problem.
Unlike standard pattern database heuristics, however, we partition our problems
into disjoint subproblems, so that the costs of solving the different
subproblems can be added together without overestimating the cost of solving
the original problem. Previously, we showed how to statically partition the
sliding-tile puzzles into disjoint groups of tiles to compute an admissible
heuristic, using the same partition for each state and problem instance. Here
we extend the method and show that it applies to other domains as well. We also
present another method for additive heuristics which we call dynamically
partitioned pattern databases. Here we partition the problem into disjoint
subproblems for each state of the search dynamically. We discuss the pros and
cons of each of these methods and apply both methods to three different problem
domains: the sliding-tile puzzles, the 4-peg Towers of Hanoi problem, and
finding an optimal vertex cover of a graph. We find that in some problem
domains, static partitioning is most effective, while in others dynamic
partitioning is a better choice. In each of these problem domains, either
statically partitioned or dynamically partitioned pattern database heuristics
are the best known heuristics for the problem.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:41:12 GMT"
}
] | 1,309,737,600,000 | [
[
"Felner",
"A.",
""
],
[
"Hanan",
"S.",
""
],
[
"Korf",
"R. E.",
""
]
] |
1107.0051 | R. Begleiter | R. Begleiter, R. El-Yaniv, G. Yona | On Prediction Using Variable Order Markov Models | null | Journal Of Artificial Intelligence Research, Volume 22, pages
385-421, 2004 | 10.1613/jair.1491 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is concerned with algorithms for prediction of discrete sequences
over a finite alphabet, using variable order Markov models. The class of such
algorithms is large and in principle includes any lossless compression
algorithm. We focus on six prominent prediction algorithms, including Context
Tree Weighting (CTW), Prediction by Partial Match (PPM) and Probabilistic
Suffix Trees (PSTs). We discuss the properties of these algorithms and compare
their performance using real life sequences from three domains: proteins,
English text and music pieces. The comparison is made with respect to
prediction quality as measured by the average log-loss. We also compare
classification algorithms based on these predictors with respect to a number of
large protein classification tasks. Our results indicate that a "decomposed"
CTW (a variant of the CTW algorithm) and PPM outperform all other algorithms in
sequence prediction tasks. Somewhat surprisingly, a different algorithm, which
is a modification of the Lempel-Ziv compression algorithm, significantly
outperforms all algorithms on the protein classification problems.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:43:01 GMT"
}
] | 1,309,737,600,000 | [
[
"Begleiter",
"R.",
""
],
[
"El-Yaniv",
"R.",
""
],
[
"Yona",
"G.",
""
]
] |
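The comparison metric the abstract mentions, average log-loss, is easy to illustrate. The sketch below trains a fixed-order Markov predictor with add-one smoothing and scores a test sequence in bits per symbol; it is a minimal, hypothetical baseline for the evaluation protocol, not one of the six algorithms (CTW, PPM, PSTs, etc.) compared in the paper.

```python
import math
from collections import Counter, defaultdict

def train_markov(seq, order=2):
    """Count which symbols follow each context of length `order`."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[seq[i - order:i]][seq[i]] += 1
    return counts

def avg_log_loss(counts, seq, alphabet, order=2):
    """Average log-loss in bits per symbol, with add-one smoothing."""
    loss, n = 0.0, 0
    for i in range(order, len(seq)):
        ctx, sym = seq[i - order:i], seq[i]
        c = counts.get(ctx, Counter())
        p = (c[sym] + 1) / (sum(c.values()) + len(alphabet))
        loss -= math.log2(p)
        n += 1
    return loss / max(n, 1)

train, test = "abracadabraabracadabra", "abracadabra"
model = train_markov(train)
print(avg_log_loss(model, test, alphabet=set(train)))
```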
1107.0052 | J. Hoffmann | J. Hoffmann, J. Porteous, L. Sebastia | Ordered Landmarks in Planning | null | Journal Of Artificial Intelligence Research, Volume 22, pages
215-278, 2004 | 10.1613/jair.1492 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many known planning tasks have inherent constraints concerning the best order
in which to achieve the goals. A number of research efforts have been made to
detect such constraints and to use them for guiding search, in the hope of
speeding up the planning process. We go beyond the previous approaches by
considering ordering constraints not only over the (top-level) goals, but also
over the sub-goals that will necessarily arise during planning. Landmarks are
facts that must be true at some point in every valid solution plan. We extend
Koehler and Hoffmann's definition of reasonable orders between top level goals
to the more general case of landmarks. We show how landmarks can be found, how
their reasonable orders can be approximated, and how this information can be
used to decompose a given planning task into several smaller sub-tasks. Our
methodology is completely domain- and planner-independent. The implementation
demonstrates that the approach can yield significant runtime performance
improvements when used as a control loop around state-of-the-art sub-optimal
planning systems, as exemplified by FF and LPG.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:43:14 GMT"
}
] | 1,309,737,600,000 | [
[
"Hoffmann",
"J.",
""
],
[
"Porteous",
"J.",
""
],
[
"Sebastia",
"L.",
""
]
] |
1107.0053 | Daniel Bryce | N. Roy, G. Gordon, S. Thrun | Finding Approximate POMDP solutions Through Belief Compression | null | Journal Of Artificial Intelligence Research, Volume 23, pages
1-40, 2005 | 10.1613/jair.1496 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard value function approaches to finding policies for Partially
Observable Markov Decision Processes (POMDPs) are generally considered to be
intractable for large models. The intractability of these algorithms is to a
large extent a consequence of computing an exact, optimal policy over the
entire belief space. However, in real-world POMDP problems, computing the
optimal policy for the full belief space is often unnecessary for good control
even for problems with complicated policy classes. The beliefs experienced by
the controller often lie near a structured, low-dimensional subspace embedded
in the high-dimensional belief space. Finding a good approximation to the
optimal value function for only this subspace can be much easier than computing
the full value function. We introduce a new method for solving large-scale
POMDPs by reducing the dimensionality of the belief space. We use Exponential
family Principal Components Analysis (Collins, Dasgupta and Schapire, 2002) to
represent sparse, high-dimensional belief spaces using small sets of learned
features of the belief state. We then plan only in terms of the low-dimensional
belief features. By planning in this low-dimensional space, we can find
policies for POMDP models that are orders of magnitude larger than models that
can be handled by conventional techniques. We demonstrate the use of this
algorithm on a synthetic problem and on mobile robot navigation tasks.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:44:33 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Oct 2011 15:16:13 GMT"
}
] | 1,317,772,800,000 | [
[
"Roy",
"N.",
""
],
[
"Gordon",
"G.",
""
],
[
"Thrun",
"S.",
""
]
] |
1107.0054 | W. P. Birmingham | W. P. Birmingham, C. J. Meek | A Comprehensive Trainable Error Model for Sung Music Queries | null | Journal Of Artificial Intelligence Research, Volume 22, pages
57-91, 2004 | 10.1613/jair.1334 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a model for errors in sung queries, a variant of the hidden Markov
model (HMM). This is a solution to the problem of identifying the degree of
similarity between a (typically error-laden) sung query and a potential target
in a database of musical works, an important problem in the field of music
information retrieval. Similarity metrics are a critical component of
query-by-humming (QBH) applications which search audio and multimedia databases
for strong matches to oral queries. Our model comprehensively expresses the
types of error or variation between target and query: cumulative and
non-cumulative local errors, transposition, tempo and tempo changes,
insertions, deletions and modulation. The model is not only expressive, but
automatically trainable, or able to learn and generalize from query examples.
We present results of simulations, designed to assess the discriminatory
potential of the model, and tests with real sung queries, to demonstrate
relevance to real-world applications.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:44:46 GMT"
}
] | 1,309,737,600,000 | [
[
"Birmingham",
"W. P.",
""
],
[
"Meek",
"C. J.",
""
]
] |
1107.0055 | W. Zhang | W. Zhang | Phase Transitions and Backbones of the Asymmetric Traveling Salesman
Problem | null | Journal Of Artificial Intelligence Research, Volume 21, pages
471-497, 2004 | 10.1613/jair.1389 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been much interest in phase transitions of
combinatorial problems. Phase transitions have been successfully used to
analyze combinatorial optimization problems, characterize their typical-case
features and locate the hardest problem instances. In this paper, we study
phase transitions of the asymmetric Traveling Salesman Problem (ATSP), an
NP-hard combinatorial optimization problem that has many real-world
applications. Using random instances of up to 1,500 cities in which intercity
distances are uniformly distributed, we empirically show that many properties
of the problem, including the optimal tour cost and backbone size, experience
sharp transitions as the precision of intercity distances increases across a
critical value. Our experimental results on the costs of the ATSP tours and
assignment problem agree with the theoretical result that the asymptotic cost
of the assignment problem is pi^2/6 as the number of cities goes to infinity. In
addition, we show that the average computational cost of the well-known
branch-and-bound subtour elimination algorithm for the problem also exhibits a
thrashing behavior, transitioning from easy to difficult as the distance
precision increases. These results answer positively an open question regarding
the existence of phase transitions in the ATSP, and provide guidance on how
difficult ATSP problem instances should be generated.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 20:45:03 GMT"
}
] | 1,309,737,600,000 | [
[
"Zhang",
"W.",
""
]
] |
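The theoretical limit the abstract refers to is the random assignment result: for i.i.d. intercity costs, the expected optimal assignment cost converges to zeta(2). A hedged restatement in LaTeX, with the exact distributional assumptions taken from the random-assignment literature rather than from the paper itself:

```latex
% Expected optimal cost of the n x n random assignment problem with
% i.i.d. Uniform[0,1] (or Exp(1)) costs c_{ij}:
\lim_{n \to \infty} \mathbb{E}\left[ \min_{\pi \in S_n} \sum_{i=1}^{n} c_{i\,\pi(i)} \right]
  = \zeta(2) = \frac{\pi^2}{6} \approx 1.645
```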
1107.0134 | Vladimir Kurbalija | Vladimir Kurbalija, Milo\v{s} Radovanovi\'c, Zoltan Geler, Mirjana
Ivanovi\'c | The Influence of Global Constraints on Similarity Measures for
Time-Series Databases | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A time series consists of a series of values or events obtained over repeated
measurements in time. Analysis of time series represents an important tool in
many application areas, such as stock market analysis, process and quality
control, observation of natural phenomena, medical treatments, etc. A vital
component in many types of time-series analysis is the choice of an appropriate
distance/similarity measure. Numerous measures have been proposed to date, with
the most successful ones based on dynamic programming. Since these measures have
quadratic time complexity, however, global constraints are often employed to limit the
search space in the matrix during the dynamic programming procedure, in order to speed
up computation. Furthermore, it has been reported that such constrained
measures can also achieve better accuracy. In this paper, we investigate two
representative time-series distance/similarity measures based on dynamic
programming, Dynamic Time Warping (DTW) and Longest Common Subsequence (LCS),
and the effects of global constraints on them. Through extensive experiments on
a large number of time-series data sets, we demonstrate how global constraints
can significantly reduce the computation time of DTW and LCS. We also show
that, if the constraint parameter is tight enough (less than 10-15% of
time-series length), the constrained measure becomes significantly different
from its unconstrained counterpart, in the sense of producing qualitatively
different 1-nearest neighbor graphs. This observation explains the potential
for accuracy gains when using constrained measures, highlighting the need for
careful tuning of constraint parameters in order to achieve a good trade-off
between speed and accuracy.
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2011 08:05:40 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Dec 2013 11:47:39 GMT"
}
] | 1,388,361,600,000 | [
[
"Kurbalija",
"Vladimir",
""
],
[
"Radovanović",
"Miloš",
""
],
[
"Geler",
"Zoltan",
""
],
[
"Ivanović",
"Mirjana",
""
]
] |
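A minimal sketch of the kind of constrained measure discussed above: dynamic time warping restricted to a Sakoe-Chiba band, where the band half-width `r` (as a fraction of series length) plays the role of the constraint parameter. This is an illustrative implementation under standard DTW assumptions, not the authors' code.

```python
import math

def dtw_sakoe_chiba(x, y, r=0.10):
    """DTW distance restricted to a Sakoe-Chiba band of half-width
    r * max(len(x), len(y)); r is the constraint parameter."""
    n, m = len(x), len(y)
    w = max(int(r * max(n, m)), abs(n - m))   # band must cover the diagonal offset
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = (x[i - 1] - y[j - 1]) ** 2
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return math.sqrt(D[n][m])

print(dtw_sakoe_chiba([0, 1, 2, 3, 2, 1], [0, 0, 1, 2, 3, 1], r=0.15))
```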
1107.0194 | Jitesh Dundas | Jitesh Dundas | Law of Connectivity in Machine Learning | Keywords- Machine Learning; unknown entities; independence;
interaction; coverage, silent connections; ISSN 1473-804x online, 1473-8031
print | I. J. of SIMULATION Vol. 11 No 5 1-10 Dec 2010 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present in this paper our law that there is always a connection present
between two entities, with a self-connection being present at least in each
node. An entity is an object, physical or imaginary, that is connected by a
path (or connection) and which is important for achieving the desired result of
the scenario. In machine learning, we state that for any scenario, a subject
entity is always, directly or indirectly, connected and affected by single or
multiple independent / dependent entities, and their impact on the subject
entity is dependent on various factors falling into the categories such as the
existenc
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2011 11:08:32 GMT"
}
] | 1,309,737,600,000 | [
[
"Dundas",
"Jitesh",
""
]
] |
1107.0268 | Mladen Nikolic | Mladen Nikolic, Filip Maric, Predrag Janicic | Simple Algorithm Portfolio for SAT | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The importance of algorithm portfolio techniques for SAT has long been noted,
and a number of very successful systems have been devised, including the most
successful one --- SATzilla. However, all these systems are quite complex (to
understand, reimplement, or modify). In this paper we propose a new algorithm
portfolio for SAT that is extremely simple, but at the same time so efficient
that it outperforms SATzilla. For a new SAT instance to be solved, our
portfolio finds its k-nearest neighbors from the training set and invokes a
solver that performs the best on those instances. The main distinguishing
feature of our algorithm portfolio is the locality of the selection procedure
--- the selection of a SAT solver is based only on a few instances similar to the
input one.
| [
{
"version": "v1",
"created": "Fri, 1 Jul 2011 16:20:44 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Dec 2011 14:38:07 GMT"
}
] | 1,323,820,800,000 | [
[
"Nikolic",
"Mladen",
""
],
[
"Maric",
"Filip",
""
],
[
"Janicic",
"Predrag",
""
]
] |
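The selection procedure described above is simple enough to sketch directly: find the k nearest training instances in feature space and invoke the solver with the best aggregate performance on them. The feature vectors, runtime matrix and distance metric below are hypothetical placeholders, not the paper's actual data.

```python
import numpy as np

def select_solver(instance_feats, train_feats, train_runtimes, k=10):
    """Pick the solver with the lowest total runtime on the k training
    instances closest (Euclidean distance) to the new instance.
    train_feats: (n_instances, n_features); train_runtimes: (n_instances, n_solvers)."""
    d = np.linalg.norm(train_feats - instance_feats, axis=1)
    nearest = np.argsort(d)[:k]
    return int(np.argmin(train_runtimes[nearest].sum(axis=0)))

rng = np.random.default_rng(0)
feats = rng.random((100, 5))          # toy SAT instance features
runtimes = rng.random((100, 3)) * 60  # toy runtimes (seconds) for 3 solvers
print(select_solver(rng.random(5), feats, runtimes, k=10))
```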
1107.1020 | Jun-Yi Chai | Junyi Chai, James N.K. Liu | A Novel Multicriteria Group Decision Making Approach With Intuitionistic
Fuzzy SIR Method | Paper presented at the 2010 World Automation Congress | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The superiority and inferiority ranking (SIR) method is a generalization of the
well-known PROMETHEE method, which can deal more efficiently with
multi-criterion decision making (MCDM) problems. Intuitionistic fuzzy sets
(IFSs), as an important extension of fuzzy sets (IFs), include both membership
functions and non-membership functions and can be used to describe uncertain
information more precisely. In the real world, decision situations usually arise
under uncertain environments and involve multiple individuals who have their own
points of view on handling decision problems. In order to solve the uncertain
group MCDM problem, we propose a novel intuitionistic fuzzy SIR method in this
paper. This approach uses intuitionistic fuzzy aggregation operators and SIR
ranking methods to handle uncertain information; integrate individual opinions
into group opinions; make decisions on multiple criteria; and finally
structure a specific decision map. The proposed approach is illustrated in a
simulation of a group decision making problem related to supply chain management.
| [
{
"version": "v1",
"created": "Wed, 6 Jul 2011 03:32:21 GMT"
}
] | 1,309,996,800,000 | [
[
"Chai",
"Junyi",
""
],
[
"Liu",
"James N. K.",
""
]
] |
1107.1686 | Carlos Damasio | Carlos Viegas Dam\'asio, Alun Preece, Umberto Straccia | Proceedings of the Doctoral Consortium and Poster Session of the 5th
International Symposium on Rules (RuleML 2011@IJCAI) | HTML file with clickable links to papers | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This volume contains the papers presented at the first edition of the
Doctoral Consortium of the 5th International Symposium on Rules (RuleML
2011@IJCAI) held on July 19th, 2011 in Barcelona, as well as the poster session
papers of the RuleML 2011@IJCAI main conference.
| [
{
"version": "v1",
"created": "Fri, 8 Jul 2011 18:00:49 GMT"
}
] | 1,310,342,400,000 | [
[
"Damásio",
"Carlos Viegas",
""
],
[
"Preece",
"Alun",
""
],
[
"Straccia",
"Umberto",
""
]
] |
1107.1950 | Gopalakrishnan Tr Nair | Dr T.R. Gopalakrishnan Nair, Meenakshi Malhotra | Knowledge Embedding and Retrieval Strategies in an Informledge System | 5 pages, 7 pages, International Conference on Information and
Knowledge Management (ICIKM-IEEE), Haikou, China, 2011 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Informledge System (ILS) is a knowledge network with autonomous nodes and
intelligent links that integrate and structure the pieces of knowledge. In this
paper, we put forward the strategies for knowledge embedding and retrieval in
an ILS. ILS is a powerful knowledge network system dealing with logical storage
and connectivity of information units to form knowledge using autonomous nodes
and multi-lateral links. In ILS, the autonomous nodes known as Knowledge
Network Nodes (KNN)s play vital roles which are not only used in storage,
parsing and in forming the multi-lateral linkages between knowledge points but
also in helping the realization of intelligent retrieval of linked information
units in the form of knowledge. Knowledge built into the ILS takes the shape
of a sphere. The intelligence incorporated into the links of a KNN helps in
retrieving various knowledge threads from a specific set of KNNs. A developed
entity of information realized through a KNN takes the shape of a knowledge
cone.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2011 07:13:43 GMT"
}
] | 1,310,428,800,000 | [
[
"Nair",
"Dr T. R. Gopalakrishnan",
""
],
[
"Malhotra",
"Meenakshi",
""
]
] |
1107.2086 | Carlos Viegas Dam\'asio | Elisa Marengo, Matteo Baldoni, and Cristina Baroglio | Extend Commitment Protocols with Temporal Regulations: Why and How | Proceedings of the Doctoral Consortium and Poster Session of the 5th
International Symposium on Rules (RuleML 2011@IJCAI), pages 1-8
(arXiv:1107.1686) | null | null | RuleML-DC/2011/01 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proposal of Elisa Marengo's thesis is to extend commitment protocols to
explicitly account for temporal regulations. This extension will satisfy two
needs: (1) it will allow representing, in a flexible and modular way, temporal
regulations with a normative force, posed on the interaction, so as to
represent conventions, laws and suchlike; (2) it will allow committing to
complex conditions, which describe not only what will be achieved but to some
extent also how. These two aspects will be deeply investigated in the proposal
of a unified framework, which is part of the ongoing work and will be included
in the thesis.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2011 18:48:59 GMT"
}
] | 1,426,723,200,000 | [
[
"Marengo",
"Elisa",
""
],
[
"Baldoni",
"Matteo",
""
],
[
"Baroglio",
"Cristina",
""
]
] |
1107.2087 | Carlos Viegas Dam\'asio | Przemyslaw Woznowski, Alun Preece | Rule-Based Semantic Sensing | Proceedings of the Doctoral Consortium and Poster Session of the 5th
International Symposium on Rules (RuleML 2011@IJCAI), pages 9-16
(arXiv:1107.1686) | null | null | RuleML-DC/2011/02 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rule-Based Systems have been in use for decades to solve a variety of
problems but not in the sensor informatics domain. Rules aid the aggregation of
low-level sensor readings to form a more complete picture of the real world and
help to address 10 identified challenges for sensor network middleware. This
paper presents the reader with an overview of a system architecture and a pilot
application to demonstrate the usefulness of a system integrating rules with
sensor middleware.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2011 18:50:19 GMT"
}
] | 1,426,723,200,000 | [
[
"Woznowski",
"Przemyslaw",
""
],
[
"Preece",
"Alun",
""
]
] |
1107.2088 | Carlos Viegas Dam\'asio | Antonius Weinzierl | Advancing Multi-Context Systems by Inconsistency Management | Proceedings of the Doctoral Consortium and Poster Session of the 5th
International Symposium on Rules (RuleML 2011@IJCAI), pages 17-24
(arXiv:1107.1686) | null | null | RuleML-DC/2011/03 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-Context Systems are an expressive formalism to model (possibly)
non-monotonic information exchange between heterogeneous knowledge bases. Such
information exchange, however, often comes with unforeseen side-effects leading
to violation of constraints, making the system inconsistent, and thus unusable.
Although there are many approaches to assess and repair a single inconsistent
knowledge base, the heterogeneous nature of Multi-Context Systems poses
problems which have not yet been addressed in a satisfying way: How to identify
and explain an inconsistency that spreads over multiple knowledge bases with
different logical formalisms (e.g., logic programs and ontologies)? What are
the causes of inconsistency if inference/information exchange is non-monotonic
(e.g., absent information as cause)? How to deal with inconsistency if access
to knowledge bases is restricted (e.g., companies exchange information, but do
not allow arbitrary modifications to their knowledge bases)? Many traditional
approaches solely aim for a consistent system, but automatic removal of
inconsistency is not always desirable. Therefore a human operator has to be
supported in finding the erroneous parts contributing to the inconsistency. In
my thesis those issues will be addressed mainly from a foundational perspective,
while our research project also provides algorithms and prototype
implementations.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2011 18:52:29 GMT"
}
] | 1,426,723,200,000 | [
[
"Weinzierl",
"Antonius",
""
]
] |
1107.2089 | Carlos Viegas Dam\'asio | Jaroslaw Bak | Rule-based query answering method for a knowledge base of economic
crimes | Proceedings of the Doctoral Consortium and Poster Session of the 5th
International Symposium on Rules (RuleML 2011@IJCAI), pages 25-32
(arXiv:1107.1686) | null | null | RuleML-DC/2011/04 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a description of the PhD thesis which aims to propose a rule-based
query answering method for relational data. In this approach we use
additional knowledge which is represented as a set of rules and describes the
source data at the concept (ontological) level. Queries are posed in terms of the
abstract level. We present two methods. The first one uses hybrid reasoning and
the second one exploits only forward chaining. These two methods are
demonstrated by the prototypical implementation of the system coupled with the
Jess engine. Tests are performed on the knowledge base of the selected economic
crimes: fraudulent disbursement and money laundering.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2011 18:53:32 GMT"
}
] | 1,310,428,800,000 | [
[
"Bak",
"Jaroslaw",
""
]
] |
1107.2090 | Carlos Viegas Dam\'asio | Alexander Sellner, Christopher Schwarz, Erwin Zinser | Semantic-ontological combination of Business Rules and Business
Processes in IT Service Management | Proceedings of the Doctoral Consortium and Poster Session of the 5th
International Symposium on Rules (RuleML 2011@IJCAI), pages 33-40
(arXiv:1107.1686) | null | null | RuleML-DC/2011/05 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | IT Service Management deals with managing a broad range of items related to
complex system environments. As there is a close connection to both business
interests and IT infrastructure, the application of semantic expressions which
are seamlessly integrated within applications for managing ITSM environments,
can help to improve transparency and profitability. This paper focuses on the
challenges regarding the integration of semantics and ontologies within ITSM
environments. It will describe the paradigm of relationships and inheritance
within complex service trees and will present an approach of ontologically
expressing them. Furthermore, the application of SBVR-based rules as executable
SQL triggers will be discussed. Finally, the broad range of topics for further
research, derived from the findings, will be presented.
| [
{
"version": "v1",
"created": "Mon, 11 Jul 2011 18:54:36 GMT"
}
] | 1,310,428,800,000 | [
[
"Sellner",
"Alexander",
""
],
[
"Schwarz",
"Christopher",
""
],
[
"Zinser",
"Erwin",
""
]
] |
1107.2997 | Jun-Yi Chai | Junyi Chai, James N.K. Liu | An Ontology-driven Framework for Supporting Complex Decision Process | Paper presented at the 2010 World Automation Congress | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study proposes a framework of ONTOlogy-based Group Decision Support
System (ONTOGDSS) for decision process which exhibits the complex structure of
decision-problem and decision-group. It is capable of reducing the complexity
of problem structure and group relations. The system allows decision makers to
participate in group decision-making through the web environment, via the
ontology relation. It facilitates the management of decision process as a
whole, from criteria generation, alternative evaluation, and opinion
interaction to decision aggregation. The embedded ontology structure in
ONTOGDSS provides the important formal description features to facilitate
decision analysis and verification. It examines the software architecture, the
selection methods, the decision path, etc. Finally, the ontology application of
this system is illustrated with a specific real case to demonstrate its
potentials towards decision-making development.
| [
{
"version": "v1",
"created": "Fri, 15 Jul 2011 07:17:08 GMT"
}
] | 1,310,947,200,000 | [
[
"Chai",
"Junyi",
""
],
[
"Liu",
"James N. K.",
""
]
] |
1107.3302 | Mahdaoui Rafik | Rafik Mahdaoui, Leila Hayet Mouss, Mohamed Djamel Mouss, Ouahiba
Chouhal | A Temporal Neuro-Fuzzy Monitoring System to Manufacturing Systems | 10 pages, 11 figures, IJCSI International Journal of Computer Science
Issues, Vol. 8, Issue 3, No. 1, May 2011 ISSN (Online): 1694-0814
www.IJCSI.org | IJCSI International Journal of Computer Science Issues, Vol. 8,
Issue 3, No. 1, May 2011 ISSN (Online): 1694-0814 www.IJCSI.org | null | IJCSI-8-3-1-237-246.pdf | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fault diagnosis and failure prognosis are essential techniques in improving
the safety of many manufacturing systems. Therefore, on-line fault detection
and isolation is one of the most important tasks in safety-critical and
intelligent control systems. Computational intelligence techniques are being
investigated as extension of the traditional fault diagnosis methods. This
paper discusses the Temporal Neuro-Fuzzy Systems (TNFS) fault diagnosis within
an application study of a manufacturing system. The key issues of finding a
suitable structure for detecting and isolating ten realistic actuator faults
are described. Within this framework, interactive data-processing and simulation
software named NEFDIAG (NEuro Fuzzy DIAGnosis) version 1.0 is developed.
This software is devoted primarily to the creation, training and testing of a
Neuro-Fuzzy classification system for industrial process failures. NEFDIAG can
be represented as a special type of fuzzy perceptron, with three layers used
to classify patterns and failures. The system selected is the workshop of
SCIMAT clinker, cement factory in Algeria.
| [
{
"version": "v1",
"created": "Sun, 17 Jul 2011 14:13:34 GMT"
}
] | 1,311,033,600,000 | [
[
"Mahdaoui",
"Rafik",
""
],
[
"Mouss",
"Leila Hayet",
""
],
[
"Mouss",
"Mohamed Djamel",
""
],
[
"Chouhal",
"Ouahiba",
""
]
] |
1107.3663 | Antoine Bordes | Antoine Bordes, Xavier Glorot, Jason Weston, Yoshua Bengio | Towards Open-Text Semantic Parsing via Multi-Task Learning of Structured
Embeddings | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Open-text (or open-domain) semantic parsers are designed to interpret any
statement in natural language by inferring a corresponding meaning
representation (MR). Unfortunately, large scale systems cannot be easily
machine-learned due to lack of directly supervised data. We propose here a
method that learns to assign MRs to a wide range of text (using a dictionary of
more than 70,000 words, which are mapped to more than 40,000 entities) thanks
to a training scheme that combines learning from WordNet and ConceptNet with
learning from raw text. The model learns structured embeddings of words,
entities and MRs via a multi-task training process operating on these diverse
sources of data that integrates all the learnt knowledge into a single system.
This work ends up combining methods for knowledge acquisition, semantic
parsing, and word-sense disambiguation. Experiments on various tasks indicate
that our approach is indeed successful and can form a basis for future more
sophisticated systems.
| [
{
"version": "v1",
"created": "Tue, 19 Jul 2011 09:44:09 GMT"
}
] | 1,311,120,000,000 | [
[
"Bordes",
"Antoine",
""
],
[
"Glorot",
"Xavier",
""
],
[
"Weston",
"Jason",
""
],
[
"Bengio",
"Yoshua",
""
]
] |
1107.3894 | Lu Dang Khoa Nguyen | Nguyen Lu Dang Khoa and Sanjay Chawla | Online Anomaly Detection Systems Using Incremental Commute Time | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Commute Time Distance (CTD) is a random walk based metric on graphs. CTD has
found widespread applications in many domains including personalized search,
collaborative filtering and making search engines robust against manipulation.
Our interest is inspired by the use of CTD as a metric for anomaly detection.
It has been shown that CTD can be used to simultaneously identify both global
and local anomalies. Here we propose an accurate and efficient approximation
for computing the CTD in an incremental fashion in order to facilitate
real-time applications. An online anomaly detection algorithm is designed where
the CTD of each new arriving data point to any point in the current graph can
be estimated in constant time ensuring a real-time response. Moreover, the
proposed approach can also be applied in many other applications that utilize
commute time distance.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2011 05:35:40 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Jul 2011 06:37:01 GMT"
}
] | 1,311,811,200,000 | [
[
"Khoa",
"Nguyen Lu Dang",
""
],
[
"Chawla",
"Sanjay",
""
]
] |
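For reference, the exact (batch) commute time distance that the paper approximates incrementally can be computed from the Moore-Penrose pseudoinverse of the graph Laplacian, CTD(i, j) = vol(G) (l+_ii + l+_jj - 2 l+_ij). A small sketch of that standard formula, which costs O(n^3) and is therefore exactly what the incremental method is designed to avoid:

```python
import numpy as np

def commute_time_distances(A):
    """Exact commute time distances from an adjacency matrix A, via the
    pseudoinverse of the graph Laplacian L = D - A:
    CTD(i, j) = vol(G) * (L+_ii + L+_jj - 2 * L+_ij)."""
    d = A.sum(axis=1)
    vol = d.sum()
    L = np.diag(d) - A
    Lp = np.linalg.pinv(L)
    diag = np.diag(Lp)
    return vol * (diag[:, None] + diag[None, :] - 2 * Lp)

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(commute_time_distances(A))
```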
1107.4035 | David Poole | David Poole, Fahiem Bacchus, Jacek Kisynski | Towards Completely Lifted Search-based Probabilistic Inference | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The promise of lifted probabilistic inference is to carry out probabilistic
inference in a relational probabilistic model without needing to reason about
each individual separately (grounding out the representation) by treating the
undistinguished individuals as a block. Current exact methods still need to
ground out in some cases, typically because the representation of the
intermediate results is not closed under the lifted operations. We set out to
answer the question as to whether there is some fundamental reason why lifted
algorithms would need to ground out undifferentiated individuals. We have two
main results: (1) We completely characterize the cases where grounding is
polynomial in a population size, and show how we can do lifted inference in
time polynomial in the logarithm of the population size for these cases. (2)
For the case of no-argument and single-argument parametrized random variables
where the grounding is not polynomial in a population size, we present lifted
inference which is polynomial in the population size whereas grounding is
exponential. Neither of these cases requires reasoning separately about the
individuals that are not explicitly mentioned.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2011 17:04:12 GMT"
},
{
"version": "v2",
"created": "Thu, 21 Jul 2011 15:01:14 GMT"
}
] | 1,311,292,800,000 | [
[
"Poole",
"David",
""
],
[
"Bacchus",
"Fahiem",
""
],
[
"Kisynski",
"Jacek",
""
]
] |
1107.4161 | Sebastien Verel | Fabio Daolio (ISI), S\'ebastien Verel (INRIA Lille - Nord Europe),
Gabriela Ochoa, Marco Tomassini (ISI) | Local Optima Networks of the Quadratic Assignment Problem | null | IEEE world conference on computational intelligence (WCCI - CEC),
Barcelona : Spain (2010) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using a recently proposed model for combinatorial landscapes, Local Optima
Networks (LON), we conduct a thorough analysis of two types of instances of the
Quadratic Assignment Problem (QAP). This network model is a reduction of the
landscape in which the nodes correspond to the local optima, and the edges
account for the notion of adjacency between their basins of attraction. The
model was inspired by the notion of 'inherent network' of potential energy
surfaces proposed in physical-chemistry. The local optima networks extracted
from the so called uniform and real-like QAP instances, show features clearly
distinguishing these two types of instances. Apart from a clear confirmation
that the search difficulty increases with the problem dimension, the analysis
provides new confirming evidence explaining why the real-like instances are
easier to solve exactly using heuristic search, while the uniform instances are
easier to solve approximately. Although the local optima network model is still
under development, we argue that it provides a novel view of combinatorial
landscapes, opening up the possibilities for new analytical tools and
understanding of problem difficulty in combinatorial optimization.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2011 05:07:25 GMT"
}
] | 1,311,292,800,000 | [
[
"Daolio",
"Fabio",
"",
"ISI"
],
[
"Verel",
"Sébastien",
"",
"INRIA Lille - Nord Europe"
],
[
"Ochoa",
"Gabriela",
"",
"ISI"
],
[
"Tomassini",
"Marco",
"",
"ISI"
]
] |
1107.4162 | Sebastien Verel | S\'ebastien Verel (INRIA Lille - Nord Europe), Gabriela Ochoa, Marco
Tomassini (ISI) | Local Optima Networks of NK Landscapes with Neutrality | IEEE Transactions on Evolutionary Computation volume 14, 6 (2010) to
appear | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In previous work we have introduced a network-based model that abstracts many
details of the underlying landscape and compresses the landscape information
into a weighted, oriented graph which we call the local optima network. The
vertices of this graph are the local optima of the given fitness landscape,
while the arcs are transition probabilities between local optima basins. Here
we extend this formalism to neutral fitness landscapes, which are common in
difficult combinatorial search spaces. By using two known neutral variants of
the NK family (i.e. NKp and NKq) in which the amount of neutrality can be tuned
by a parameter, we show that our new definitions of the optima networks and the
associated basins are consistent with the previous definitions for the
non-neutral case. Moreover, our empirical study and statistical analysis show
that the features of neutral landscapes interpolate smoothly between landscapes
with maximum neutrality and non-neutral ones. We found some unknown structural
differences between the two studied families of neutral landscapes. But
overall, the network features studied confirmed that neutrality, in landscapes
with percolating neutral networks, may enhance heuristic search. Our current
methodology requires the exhaustive enumeration of the underlying search space.
Therefore, sampling techniques should be developed before this analysis can
have practical implications. We argue, however, that the proposed model offers
a new perspective into the problem difficulty of combinatorial optimization
problems and may inspire the design of more effective search heuristics.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2011 05:08:03 GMT"
}
] | 1,311,292,800,000 | [
[
"Verel",
"Sébastien",
"",
"INRIA Lille - Nord Europe"
],
[
"Ochoa",
"Gabriela",
"",
"ISI"
],
[
"Tomassini",
"Marco",
"",
"ISI"
]
] |
1107.4163 | Sebastien Verel | David Simoncini, S\'ebastien Verel, Philippe Collard, Manuel Clergue | Centric selection: a way to tune the exploration/exploitation trade-off | null | GECCO'09, Montreal : Canada (2009) | 10.1145/1569901.1570023 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the exploration / exploitation trade-off in cellular
genetic algorithms. We define a new selection scheme, the centric selection,
which is tunable and allows controlling the selective pressure with a single
parameter. The equilibrium model is used to study the influence of the centric
selection on the selective pressure and a new model which takes into account
problem dependent statistics and selective pressure in order to deal with the
exploration / exploitation trade-off is proposed: the punctuated equilibria
model. Performance on the quadratic assignment problem and NK-landscapes reveals
an optimal exploration / exploitation trade-off on both
classes of problems. The punctuated equilibria model is used to explain these
results.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2011 05:08:30 GMT"
}
] | 1,311,292,800,000 | [
[
"Simoncini",
"David",
""
],
[
"Verel",
"Sébastien",
""
],
[
"Collard",
"Philippe",
""
],
[
"Clergue",
"Manuel",
""
]
] |
1107.4164 | Sebastien Verel | Leonardo Vanneschi (DISCo), S\'ebastien Verel, Philippe Collard, Marco
Tomassini (ISI) | NK landscapes difficulty and Negative Slope Coefficient: How Sampling
Influences the Results | null | evoNum workshop of evostar conference, Tubingen : Germany (2009) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Negative Slope Coefficient is an indicator of problem hardness that has been
introduced in 2004 and that has returned promising results on a large set of
problems. It is based on the concept of fitness cloud and works by partitioning
the cloud into a number of bins representing as many different regions of the
fitness landscape. The measure is calculated by joining the bins centroids by
segments and summing all their negative slopes. In this paper, for the first
time, we point out a potential problem of the Negative Slope Coefficient: we
study its value for different instances of the well known NK-landscapes and we
show how this indicator is dramatically influenced by the minimum number of
points contained in a bin. Subsequently, we formally justify this behavior of
the Negative Slope Coefficient and we discuss pros and cons of this measure.
| [
{
"version": "v1",
"created": "Thu, 21 Jul 2011 05:08:50 GMT"
}
] | 1,311,292,800,000 | [
[
"Vanneschi",
"Leonardo",
"",
"DISCo"
],
[
"Verel",
"Sébastien",
"",
"ISI"
],
[
"Collard",
"Philippe",
"",
"ISI"
],
[
"Tomassini",
"Marco",
"",
"ISI"
]
] |
1107.4303 | Kostyantyn Shchekotykhin | Kostyantyn Shchekotykhin, Gerhard Friedrich, Philipp Fleiss, Patrick
Rodler | Interactive ontology debugging: two query strategies for efficient fault
localization | Published in Web Semantics: Science, Services and Agents on the World
Wide Web. arXiv admin note: substantial text overlap with arXiv:1004.5339 | Journal of Web Semantics 12 (2012) 88-103 | 10.1016/j.websem.2011.12.006 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective debugging of ontologies is an important prerequisite for their
broad application, especially in areas that rely on everyday users to create
and maintain knowledge bases, such as the Semantic Web. In such systems
ontologies capture formalized vocabularies of terms shared by its users.
However in many cases users have different local views of the domain, i.e. of
the context in which a given term is used. Inappropriate usage of terms
together with natural complications when formulating and understanding logical
descriptions may result in faulty ontologies. Recent ontology debugging
approaches use diagnosis methods to identify causes of the faults. In most
debugging scenarios these methods return many alternative diagnoses, thus
placing the burden of fault localization on the user. This paper demonstrates
how the target diagnosis can be identified by performing a sequence of
observations, that is, by querying an oracle about entailments of the target
ontology. To identify the best query we propose two query selection strategies:
a simple "split-in-half" strategy and an entropy-based strategy. The latter
allows knowledge about typical user errors to be exploited to minimize the
number of queries. Our evaluation showed that the entropy-based method
significantly reduces the number of required queries compared to the
"split-in-half" approach. We experimented with different probability
distributions of user errors and different qualities of the a-priori
probabilities. Our measurements demonstrated the superiority of entropy-based
query selection even in cases where all fault probabilities are equal, i.e.
where no information about typical user errors is available.
| [
{
"version": "v1",
"created": "Wed, 20 Jul 2011 10:02:07 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Apr 2014 10:20:14 GMT"
}
] | 1,398,729,600,000 | [
[
"Shchekotykhin",
"Kostyantyn",
""
],
[
"Friedrich",
"Gerhard",
""
],
[
"Fleiss",
"Philipp",
""
],
[
"Rodler",
"Patrick",
""
]
] |
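The two query selection strategies named in the abstract can be sketched on a toy set of diagnoses: split-in-half prefers a query that divides the remaining diagnoses evenly, while the entropy-based strategy weights the split by the diagnoses' fault probabilities. The data structures and the simplified one-step entropy score below are illustrative assumptions, not the paper's exact formulation.

```python
import math

# Toy example: three candidate diagnoses with prior fault probabilities, and
# two candidate queries. entail[d][q] records whether diagnosis d predicts
# query q to be entailed (True), not entailed (False), or neither (None).
# All names and numbers are illustrative only.
probs = {"D1": 0.5, "D2": 0.3, "D3": 0.2}
entail = {"D1": {"q1": True,  "q2": True},
          "D2": {"q1": False, "q2": True},
          "D3": {"q1": False, "q2": None}}
queries = ["q1", "q2"]

def split_in_half_score(q):
    """Imbalance of the yes/no split over diagnoses; lower is better."""
    yes = sum(entail[d][q] is True for d in probs)
    no = sum(entail[d][q] is False for d in probs)
    return abs(yes - no)

def entropy_score(q):
    """Simplified one-step information gain: prefer queries whose predicted
    'entailed' probability mass is closest to 1/2; higher is better."""
    p = sum(probs[d] for d in probs if entail[d][q] is True)
    p = min(max(p, 1e-12), 1 - 1e-12)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(min(queries, key=split_in_half_score))  # query chosen by split-in-half
print(max(queries, key=entropy_score))        # query chosen by the entropy score
```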
1107.4502 | Jerome Euzenat | Fran\c{c}ois Scharffe (LIRMM), J\'er\^ome Euzenat (INRIA Grenoble
Rh\^one-Alpes / LIG Laboratoire d'Informatique de Grenoble) | MeLinDa: an interlinking framework for the web of data | N° RR-7691 (2011) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The web of data consists of data published on the web in such a way that they
can be interpreted and connected together. It is thus critical to establish
links between these data, both for the web of data and for the semantic web
that it contributes to feed. We consider here the various techniques developed
for that purpose and analyze their commonalities and differences. We propose a
general framework and show how the diverse techniques fit in the framework.
From this framework we consider the relation between data interlinking and
ontology matching. Although they can be considered similar at a certain level
(they both relate formal entities), they serve different purposes, but would
mutually benefit from collaborating. We thus present a scheme under which it
is possible for data linking tools to take advantage of ontology alignments.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2011 12:48:32 GMT"
}
] | 1,311,552,000,000 | [
[
"Scharffe",
"François",
"",
"LIRMM"
],
[
"Euzenat",
"Jérôme",
"",
"INRIA Grenoble\n Rhône-Alpes / LIG Laboratoire d'Informatique de Grenoble"
]
] |
1107.4553 | Thierry Boy de la Tour | Thierry Boy de la Tour, Mnacho Echenim | Solving Linear Constraints in Elementary Abelian p-Groups of Symmetries | 18 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetries occur naturally in CSP or SAT problems and are not very difficult
to discover, but using them to prune the search space tends to be very
challenging. Indeed, this usually requires finding specific elements in a group
of symmetries that can be huge, and the problem of their very existence is
NP-hard. We formulate such an existence problem as a constraint problem on one
variable (the symmetry to be used) ranging over a group, and try to find
restrictions that may be solved in polynomial time. By considering a simple
form of constraints (restricted by a cardinality k) and the class of groups
that have the structure of Fp-vector spaces, we propose a partial algorithm
based on linear algebra. This polynomial algorithm always applies when k=p=2,
but may fail otherwise as we prove the problem to be NP-hard for all other
values of k and p. Experiments show that this approach, though restricted, should
allow for an efficient use of at least some groups of symmetries. We conclude
with a few directions to be explored to efficiently solve this problem in the
general case.
| [
{
"version": "v1",
"created": "Fri, 22 Jul 2011 15:52:26 GMT"
}
] | 1,311,552,000,000 | [
[
"de la Tour",
"Thierry Boy",
""
],
[
"Echenim",
"Mnacho",
""
]
] |
1107.4865 | Joost Vennekens | Joost Vennekens | Actual Causation in CP-logic | null | Theory and Practice of Logic Programming, 27th Int'l. Conference
on Logic Programming (ICLP'11) Special Issue, volume 11, issue 4-5,
p.647-662, 2011 | 10.1017/S1471068411000226 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a causal model of some domain and a particular story that has taken
place in this domain, the problem of actual causation is deciding which of the
possible causes for some effect actually caused it. One of the most influential
approaches to this problem has been developed by Halpern and Pearl in the
context of structural models. In this paper, I argue that this is actually not
the best setting for studying this problem. As an alternative, I offer the
probabilistic logic programming language of CP-logic. Unlike structural models,
CP-logic incorporates the deviant/default distinction that is generally
considered an important aspect of actual causation, and it has an explicitly
dynamic semantics, which helps to formalize the stories that serve as input to
an actual causation problem.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2011 08:24:50 GMT"
}
] | 1,311,638,400,000 | [
[
"Vennekens",
"Joost",
""
]
] |
1107.4937 | Nicolas Peltier | Mnacho Echenim and Nicolas Peltier | Instantiation Schemes for Nested Theories | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates under which conditions instantiation-based proof
procedures can be combined in a nested way, in order to mechanically construct
new instantiation procedures for richer theories. Interesting applications in
the field of verification are emphasized, particularly for handling extensions
of the theory of arrays.
| [
{
"version": "v1",
"created": "Mon, 25 Jul 2011 13:14:54 GMT"
}
] | 1,311,638,400,000 | [
[
"Echenim",
"Mnacho",
""
],
[
"Peltier",
"Nicolas",
""
]
] |
1107.5462 | Gabriela Ochoa | Edmund Burke, Tim Curtois, Matthew Hyde, Gabriela Ochoa, Jose A.
Vazquez-Rodriguez | HyFlex: A Benchmark Framework for Cross-domain Heuristic Search | 28 pages, 9 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automating the design of heuristic search methods is an active research field
within computer science, artificial intelligence and operational research. In
order to make these methods more generally applicable, it is important to
eliminate or reduce the role of the human expert in the process of designing an
effective methodology to solve a given computational search problem.
Researchers developing such methodologies are often constrained by the number
of problem domains on which they can test their adaptive, self-configuring
algorithms, which can be explained by the inherent difficulty of implementing
the corresponding domain-specific software components.
This paper presents HyFlex, a software framework for the development of
cross-domain search methodologies. The framework features a common software
interface for dealing with different combinatorial optimisation problems, and
provides the algorithm components that are problem specific. In this way, the
algorithm designer does not require detailed knowledge of the problem domains,
and can thus concentrate his/her efforts on designing adaptive general-purpose
heuristic search algorithms. Four hard combinatorial problems are fully
implemented (maximum satisfiability, one dimensional bin packing, permutation
flow shop and personnel scheduling), each containing a varied set of instance
data (including real-world industrial applications) and an extensive set of
problem specific heuristics and search operators. The framework forms the basis
for the first International Cross-domain Heuristic Search Challenge (CHeSC),
and it is currently in use by the international research community. In summary,
HyFlex represents a valuable new benchmark of heuristic search generality, with
which adaptive cross-domain algorithms are being easily developed and reliably
compared.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2011 13:07:39 GMT"
}
] | 1,311,811,200,000 | [
[
"Burke",
"Edmund",
""
],
[
"Curtois",
"Tim",
""
],
[
"Hyde",
"Matthew",
""
],
[
"Ochoa",
"Gabriela",
""
],
[
"Vazquez-Rodriguez",
"Jose A.",
""
]
] |
1107.5474 | Gonzalo A. Aranda-Corral | Gonzalo A. Aranda-Corral, Joaqu\'in Borrego-D\'iaz and Juan
Gal\'an-P\'aez | Selecting Attributes for Sport Forecasting using Formal Concept Analysis | Paper 3 for the Complex Systems in Sports Workshop 2011 (CS-Sports
2011) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | In order to address complex systems, applying pattern recognition to their
evolution could play a key role in understanding their dynamics. Global patterns
are required to detect emergent concepts and trends, some of them of a
qualitative nature. Formal Concept Analysis (FCA) is a theory whose goal is to
discover and extract knowledge from qualitative data. It provides tools for
reasoning with implication bases (and association rules). Implications and
association rules are useful for reasoning about previously selected attributes,
providing a formal foundation for logical reasoning. In this paper we analyse
how to apply FCA reasoning to increase confidence in sports betting, by means
of detecting temporal regularities in data. The approach is applied to build a
knowledge-based system for confidence reasoning.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2011 13:52:20 GMT"
},
{
"version": "v2",
"created": "Thu, 4 Aug 2011 11:46:30 GMT"
}
] | 1,312,502,400,000 | [
[
"Aranda-Corral",
"Gonzalo A.",
""
],
[
"Borrego-Díaz",
"Joaquín",
""
],
[
"Galán-Páez",
"Juan",
""
]
] |
1107.5766 | Pedro Alejandro Ortega | Pedro A. Ortega and Daniel A. Braun | Information, Utility & Bounded Rationality | 10 pages. The original publication is available at
www.springerlink.com | The Fourth Conference on General Artificial Intelligence (AGI-11),
2011 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Perfectly rational decision-makers maximize expected utility, but crucially
ignore the resource costs incurred when determining optimal actions. Here we
employ an axiomatic framework for bounded rational decision-making based on a
thermodynamic interpretation of resource costs as information costs. This leads
to a variational "free utility" principle akin to thermodynamical free energy
that trades off utility and information costs. We show that bounded optimal
control solutions can be derived from this variational principle, which leads
in general to stochastic policies. Furthermore, we show that risk-sensitive and
robust (minimax) control schemes fall out naturally from this framework if the
environment is considered as a bounded rational and perfectly rational
opponent, respectively. When resource costs are ignored, the maximum expected
utility principle is recovered.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2011 16:53:15 GMT"
}
] | 1,311,897,600,000 | [
[
"Ortega",
"Pedro A.",
""
],
[
"Braun",
"Daniel A.",
""
]
] |
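The record above (1107.5766) describes a variational "free utility" principle that trades off utility against information costs. Below is a minimal sketch of the resulting bounded-rational policy, assuming the standard softmax-like solution p(a) ∝ prior(a)·exp(β·U(a)) with inverse temperature β; the function name and toy utilities are illustrative, and the exact formulation in the paper may differ.

```python
import numpy as np

def bounded_rational_policy(utilities, prior, beta):
    """Return the policy maximizing the 'free utility'
    F[p] = E_p[U] - (1/beta) * KL(p || prior),
    whose solution is p(a) proportional to prior(a) * exp(beta * U(a))."""
    u = np.asarray(utilities, dtype=float)
    p0 = np.asarray(prior, dtype=float)
    logits = np.log(p0) + beta * u
    logits -= logits.max()              # numerical stability
    p = np.exp(logits)
    return p / p.sum()

if __name__ == "__main__":
    U = [1.0, 0.9, 0.0]                 # utilities of three actions
    prior = [1/3, 1/3, 1/3]
    for beta in (0.1, 1.0, 100.0):      # low beta: deliberation is costly, policy stays near the prior
        print(beta, bounded_rational_policy(U, prior, beta).round(3))
    # As beta grows the policy concentrates on the maximum-utility action,
    # recovering the perfectly rational (maximum expected utility) limit.
```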
1107.5930 | C\`esar Ferri | Jos\'e Hern\'andez-Orallo, Peter Flach, C\`esar Ferri | Technical Note: Towards ROC Curves in Cost Space | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | ROC curves and cost curves are two popular ways of visualising classifier
performance, finding appropriate thresholds according to the operating
condition, and deriving useful aggregated measures such as the area under the
ROC curve (AUC) or the area under the optimal cost curve. In this note we
present some new findings and connections between ROC space and cost space, by
using the expected loss over a range of operating conditions. In particular, we
show that ROC curves can be transferred to cost space by means of a very
natural way of understanding how thresholds should be chosen, by selecting the
threshold such that the proportion of positive predictions equals the operating
condition (either in the form of cost proportion or skew). We call these new
curves {ROC Cost Curves}, and we demonstrate that the expected loss as measured
by the area under these curves is linearly related to AUC. This opens up a
series of new possibilities and clarifies the notion of cost curve and its
relation to ROC analysis. In addition, we show that for a classifier that
assigns the scores in an evenly-spaced way, these curves are equal to the Brier
Curves. As a result, this establishes the first clear connection between AUC
and the Brier score.
| [
{
"version": "v1",
"created": "Fri, 29 Jul 2011 11:03:38 GMT"
}
] | 1,312,156,800,000 | [
[
"Hernández-Orallo",
"José",
""
],
[
"Flach",
"Peter",
""
],
[
"Ferri",
"Cèsar",
""
]
] |
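For the ROC/cost-space note above (1107.5930), the sketch below illustrates the threshold-choice rule mentioned in the abstract: predict positive on a fraction of examples equal to the operating condition, then measure the expected loss. The loss normalisation and the helper names (rate_driven_threshold, expected_loss) are assumptions made for illustration and may not match the paper's exact definitions.

```python
import numpy as np

def rate_driven_threshold(scores, c):
    """Pick the threshold so that (approximately) a fraction c of the examples
    is predicted positive, as sketched in the abstract above."""
    return np.quantile(scores, 1.0 - c)

def expected_loss(scores, labels, c):
    """Expected loss at cost proportion c (false-negative cost c, false-positive
    cost 1-c); the exact normalisation used in the paper may differ."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    t = rate_driven_threshold(scores, c)
    pred = scores >= t
    fn = np.mean((labels == 1) & ~pred)   # fraction of missed positives
    fp = np.mean((labels == 0) & pred)    # fraction of false alarms
    return 2 * (c * fn + (1 - c) * fp)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 1000)
    scores = 0.3 * labels + rng.normal(0, 0.3, 1000)   # scores mildly correlated with labels
    cs = np.linspace(0, 1, 101)
    losses = [expected_loss(scores, labels, c) for c in cs]
    print("average expected loss over operating conditions:", float(np.mean(losses)))
```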
1108.0155 | Michael Schneider | Michael Schneider, Geoff Sutcliffe | Reasoning in the OWL 2 Full Ontology Language using First-Order
Automated Theorem Proving | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | OWL 2 has been standardized by the World Wide Web Consortium (W3C) as a
family of ontology languages for the Semantic Web. The most expressive of these
languages is OWL 2 Full, but to date no reasoner has been implemented for this
language. Consistency and entailment checking are known to be undecidable for
OWL 2 Full. We have translated a large fragment of the OWL 2 Full semantics
into first-order logic, and used automated theorem proving systems to do
reasoning based on this theory. The results are promising, and indicate that
this approach can be applied in practice for effective OWL reasoning, beyond
the capabilities of current Semantic Web reasoners.
This is an extended version of a paper with the same title that has been
published at CADE 2011, LNAI 6803, pp. 446-460. The extended version provides
appendices with additional resources that were used in the reported evaluation.
| [
{
"version": "v1",
"created": "Sun, 31 Jul 2011 07:51:02 GMT"
}
] | 1,312,243,200,000 | [
[
"Schneider",
"Michael",
""
],
[
"Sutcliffe",
"Geoff",
""
]
] |
1108.1488 | Paola Di Maio | P. Di Maio | 'Just Enough' Ontology Engineering | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces 'just enough' principles and a 'systems engineering'
approach to the practice of ontology development to provide a minimal yet
complete, lightweight, agile and integrated development process, supportive of
stakeholder management and implementation independence.
| [
{
"version": "v1",
"created": "Sat, 6 Aug 2011 15:21:05 GMT"
}
] | 1,312,848,000,000 | [
[
"Di Maio",
"P.",
""
]
] |
1108.2865 | Norbert B\'atfai | Norbert B\'atfai | Conscious Machines and Consciousness Oriented Programming | 25 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate the following question: how can one write
computer programs that work like conscious beings? The motivation
behind this question is that we want to create applications that can see
the future. The aim of this paper is to provide an overall conceptual framework
for this new approach to machine consciousness. So we introduce a new
programming paradigm called Consciousness Oriented Programming (COP).
| [
{
"version": "v1",
"created": "Sun, 14 Aug 2011 12:27:39 GMT"
}
] | 1,313,452,800,000 | [
[
"Bátfai",
"Norbert",
""
]
] |
1108.3019 | Uwe Aickelin | Peer-Olaf Siebers, Uwe Aickelin | A First Approach on Modelling Staff Proactiveness in Retail Simulation
Models | 25 pages, 3 figures, 10 tables | Journal of Artificial Societies and Social Simulation, 14 (2),
pages 1-25, 2011 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been a noticeable shift in the relative composition of the industry
in the developed countries in recent years; manufacturing is decreasing while
the service sector is becoming more important. However, currently most
simulation models for investigating service systems are still built in the same
way as manufacturing simulation models, using a process-oriented world view,
i.e. they model the flow of passive entities through a system. These kinds of
models allow studying aspects of operational management but are not well suited
for studying the dynamics that appear in service systems due to human
behaviour. For these kinds of studies we require tools that allow modelling the
system and entities using an object-oriented world view, where intelligent
objects serve as abstract "actors" that are goal directed and can behave
proactively. In our work we combine process-oriented discrete event simulation
modelling and object-oriented agent based simulation modelling to investigate
the impact of people management practices on retail productivity. In this
paper, we reveal in a series of experiments what impact considering proactivity
can have on the output accuracy of simulation models of human centric systems.
The model and data we use for this investigation are based on a case study in a
UK department store. We show that considering proactivity positively influences
the validity of these kinds of models and therefore allows analysts to make
better recommendations regarding strategies to apply people management
practices.
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2011 15:25:15 GMT"
}
] | 1,313,452,800,000 | [
[
"Siebers",
"Peer-Olaf",
""
],
[
"Aickelin",
"Uwe",
""
]
] |
1108.3278 | Miroslaw Truszczynski | Marc Denecker, Victor W. Marek and Miroslaw Truszczynski | Reiter's Default Logic Is a Logic of Autoepistemic Reasoning And a Good
One, Too | In G. Brewka, V.M. Marek, and M. Truszczynski, eds. Nonmonotonic
Reasoning -- Essays Celebrating its 30th Anniversary, College Publications,
2011 (a volume of papers presented at NonMon at 30 meeting, Lexington, KY,
USA, October 2010) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fact apparently not observed earlier in the literature of nonmonotonic
reasoning is that Reiter, in his default logic paper, did not directly
formalize informal defaults. Instead, he translated a default into a certain
natural language proposition and provided a formalization of the latter. A few
years later, Moore noted that propositions like the one used by Reiter are
fundamentally different from defaults and exhibit a certain autoepistemic
nature. Thus, Reiter had developed his default logic as a formalization of
autoepistemic propositions rather than of defaults.
The first goal of this paper is to show that some problems of Reiter's
default logic as a formal way to reason about informal defaults are directly
attributable to the autoepistemic nature of default logic and to the mismatch
between informal defaults and Reiter's formal defaults, the latter being a
formal expression of the autoepistemic propositions Reiter used as a
representation of informal defaults.
The second goal of our paper is to compare the work of Reiter and Moore.
While each of them attempted to formalize autoepistemic propositions, the modes
of reasoning in their respective logics were different. We revisit Moore's and
Reiter's intuitions and present them from the perspective of autotheoremhood,
where theories can include propositions referring to the theory's own theorems.
We then discuss the formalization of this perspective in the logics of Moore
and Reiter, respectively, using the unifying semantic framework for default and
autoepistemic logics that we developed earlier. We argue that Reiter's default
logic is a better formalization of Moore's intuitions about autoepistemic
propositions than Moore's own autoepistemic logic.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2011 16:48:31 GMT"
}
] | 1,313,539,200,000 | [
[
"Denecker",
"Marc",
""
],
[
"Marek",
"Victor W.",
""
],
[
"Truszczynski",
"Miroslaw",
""
]
] |
1108.3279 | Miroslaw Truszczynski | Miroslaw Truszczynski | Revisiting Epistemic Specifications | In Marcello Balduccini and Tran Cao Son, Editors, Essays Dedicated to
Michael Gelfond on the Occasion of His 65th Birthday, Lexington, KY, USA,
October 2010, LNAI 6565, Springer | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In 1991, Michael Gelfond introduced the language of epistemic specifications.
The goal was to develop tools for modeling problems that require some form of
meta-reasoning, that is, reasoning over multiple possible worlds. Despite their
relevance to knowledge representation, epistemic specifications have received
relatively little attention so far. In this paper, we revisit the formalism of
epistemic specifications. We offer a new definition of the formalism, propose
several semantics (one of which, under syntactic restrictions we assume, turns
out to be equivalent to the original semantics by Gelfond), derive some
complexity results and, finally, show the effectiveness of the formalism for
modeling problems requiring meta-reasoning considered recently by Faber and
Woltran. All these results show that epistemic specifications deserve much more
attention than has been afforded to them so far.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2011 16:49:26 GMT"
}
] | 1,313,539,200,000 | [
[
"Truszczynski",
"Miroslaw",
""
]
] |
1108.3281 | Miroslaw Truszczynski | Victor W. Marek, Ilkka Niemela and Miroslaw Truszczynski | Origins of Answer-Set Programming - Some Background And Two Personal
Accounts | In G. Brewka, V.M. Marek, and M. Truszczynski, eds. Nonmonotonic
Reasoning -- Essays Celebrating its 30th Anniversary, College Publications,
2011 (a volume of papers presented at NonMon at 30 meeting, Lexington, KY,
USA, October 2010) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss the evolution of aspects of nonmonotonic reasoning towards the
computational paradigm of answer-set programming (ASP). We give a general
overview of the roots of ASP and follow up with the personal perspective on
research developments that helped verbalize the main principles of ASP and
differentiated it from classical logic programming.
| [
{
"version": "v1",
"created": "Tue, 16 Aug 2011 16:53:41 GMT"
}
] | 1,313,539,200,000 | [
[
"Marek",
"Victor W.",
""
],
[
"Niemela",
"Ilkka",
""
],
[
"Truszczynski",
"Miroslaw",
""
]
] |
1108.3711 | David Tolpin | David Tolpin, Solomon Eyal Shimony | Doing Better Than UCT: Rational Monte Carlo Sampling in Trees | Withdrawn: "MCTS Based on Simple Regret" (arXiv:1207.5589) is the
final corrected version published in AAAI 2012 proceedings | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | UCT, a state-of-the art algorithm for Monte Carlo tree sampling (MCTS), is
based on UCB, a sampling policy for the Multi-armed Bandit Problem (MAB) that
minimizes the accumulated regret. However, MCTS differs from MAB in that only
the final choice, rather than all arm pulls, brings a reward; that is, the
simple regret, as opposed to the cumulative regret, must be minimized. This
ongoing work aims at applying meta-reasoning techniques to MCTS, which is
non-trivial. We begin by introducing policies for multi-armed bandits with
lower simple regret than UCB, and an algorithm for MCTS which combines
cumulative and simple regret minimization and outperforms UCT. We also develop
a sampling scheme loosely based on a myopic version of perfect value of
information. Finite-time and asymptotic analysis of the policies is provided,
and the algorithms are compared empirically.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2011 10:47:16 GMT"
},
{
"version": "v2",
"created": "Wed, 25 Jul 2012 03:40:29 GMT"
}
] | 1,343,260,800,000 | [
[
"Tolpin",
"David",
""
],
[
"Shimony",
"Solomon Eyal",
""
]
] |
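To make the cumulative-versus-simple-regret distinction in the abstract above (1108.3711) concrete, here is a small multi-armed bandit experiment comparing UCB1 allocation with a uniform-allocation baseline when only the final recommendation is scored. This is an illustrative toy under made-up arm means, not the sampling policies proposed in the paper.

```python
import math, random

def pull(mu):                      # Bernoulli arm
    return 1.0 if random.random() < mu else 0.0

def ucb1(mus, budget):
    """UCB1 allocation (good cumulative regret); recommend best empirical mean."""
    n = [0] * len(mus); s = [0.0] * len(mus)
    for t in range(budget):
        if t < len(mus):
            a = t                                   # pull each arm once first
        else:
            a = max(range(len(mus)),
                    key=lambda i: s[i] / n[i] + math.sqrt(2 * math.log(t) / n[i]))
        s[a] += pull(mus[a]); n[a] += 1
    return max(range(len(mus)), key=lambda i: s[i] / n[i])

def uniform(mus, budget):
    """Uniform allocation (a simple-regret-oriented baseline)."""
    n = [0] * len(mus); s = [0.0] * len(mus)
    for t in range(budget):
        a = t % len(mus)
        s[a] += pull(mus[a]); n[a] += 1
    return max(range(len(mus)), key=lambda i: s[i] / n[i])

if __name__ == "__main__":
    random.seed(1)
    mus = [0.45, 0.5, 0.55]
    best = max(mus)
    for policy in (ucb1, uniform):
        # Simple regret: gap between the best arm and the recommended arm.
        simple_regret = sum(best - mus[policy(mus, 300)] for _ in range(500)) / 500
        print(policy.__name__, round(simple_regret, 4))
```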
1108.3757 | Patryk Filipiak | Patryk Filipiak | Self-Organizing Mixture Networks for Representation of Grayscale Digital
Images | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-Organizing Maps are commonly used for unsupervised learning purposes.
This paper is dedicated to a certain modification of SOM called SOMN
(Self-Organizing Mixture Networks), used as a mechanism for representing
grayscale digital images. Any grayscale digital image, regarded as a
distribution function, can be approximated by a corresponding Gaussian
mixture. In this paper, the use of SOMN is proposed in order to obtain such
approximations for input grayscale images in an unsupervised manner.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2011 14:11:37 GMT"
}
] | 1,313,712,000,000 | [
[
"Filipiak",
"Patryk",
""
]
] |
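The abstract above (1108.3757) treats a grayscale image as a distribution to be approximated by a Gaussian mixture. The sketch below fits such a mixture with ordinary EM (scikit-learn's GaussianMixture) on intensity-weighted pixel samples; it only illustrates the representation, not the SOMN learning rule itself, and all names and parameters are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_image_mixture(image, n_components=5, n_samples=20000, seed=0):
    """Approximate a grayscale image, viewed as an (unnormalised) 2-D density
    over pixel coordinates, by a Gaussian mixture fitted with standard EM."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    probs = image.ravel().astype(float)
    probs /= probs.sum()                         # turn intensities into a density
    idx = rng.choice(h * w, size=n_samples, p=probs)
    pts = np.column_stack((idx // w, idx % w)).astype(float)   # (row, col) samples
    return GaussianMixture(n_components=n_components, random_state=seed).fit(pts)

if __name__ == "__main__":
    # Toy "image": two bright blobs on a dark background.
    yy, xx = np.mgrid[0:64, 0:64]
    img = (np.exp(-((yy - 20) ** 2 + (xx - 20) ** 2) / 50.0)
           + np.exp(-((yy - 45) ** 2 + (xx - 40) ** 2) / 80.0))
    gmm = fit_image_mixture(img, n_components=2)
    print(gmm.means_.round(1))   # component means land near (20, 20) and (45, 40)
```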
1108.4279 | Jean-Louis Dessalles | Eric Bonabeau, Jean-Louis Dessalles (INFRES, LTCI) | Detection and emergence | jld-98072401 | Intellectica 25, 2 (1997) 85-94 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two different conceptions of emergence are reconciled as two instances of the
phenomenon of detection. In the process of comparing these two conceptions, we
find that the notions of complexity and detection allow us to form a unified
definition of emergence that clearly delineates the role of the observer.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2011 11:30:49 GMT"
}
] | 1,314,057,600,000 | [
[
"Bonabeau",
"Eric",
"",
"INFRES, LTCI"
],
[
"Dessalles",
"Jean-Louis",
"",
"INFRES, LTCI"
]
] |
1108.4804 | Wolfgang Dvo\v{r}\'ak | Wolfgang Dvo\v{r}\'ak, Michael Morak, Clemens Nopp, Stefan Woltran | dynPARTIX - A Dynamic Programming Reasoner for Abstract Argumentation | The paper appears in the Proceedings of the 19th International
Conference on Applications of Declarative Programming and Knowledge
Management (INAP 2011) | null | 10.1007/978-3-642-41524-1_14 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to announce the release of a novel system for
abstract argumentation which is based on decomposition and dynamic programming.
We provide first experimental evaluations to show the feasibility of this
approach.
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2011 10:53:02 GMT"
}
] | 1,461,110,400,000 | [
[
"Dvořák",
"Wolfgang",
""
],
[
"Morak",
"Michael",
""
],
[
"Nopp",
"Clemens",
""
],
[
"Woltran",
"Stefan",
""
]
] |
1108.4942 | Wolfgang Dvo\v{r}\'ak | Wolfgang Dvo\v{r}\'ak, Sarah Alice Gaggl, Johannes Wallner, Stefan
Woltran | Making Use of Advances in Answer-Set Programming for Abstract
Argumentation Systems | Paper appears in the Proceedings of the 19th International Conference
on Applications of Declarative Programming and Knowledge Management (INAP
2011) | null | 10.1007/978-3-642-41524-1_7 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dung's famous abstract argumentation frameworks represent the core formalism
for many problems and applications in the field of argumentation which
significantly evolved within the last decade. Recent work in the field has thus
focused on implementations for these frameworks, whereby one of the main
approaches is to use Answer-Set Programming (ASP). While some of the
argumentation semantics can be nicely expressed within the ASP language, others
required rather cumbersome encoding techniques. Recent advances in ASP systems,
in particular the metasp optimization frontend for the ASP package
gringo/claspD, provide direct commands to filter answer sets satisfying certain
subset-minimality (or -maximality) constraints. This allows for much simpler
encodings compared to the ones in the standard ASP language. In this paper, we
experimentally compare the original encodings (for the argumentation semantics
based on preferred, semi-stable, and respectively, stage extensions) with new
metasp encodings. Moreover, we provide novel encodings for the recently
introduced resolution-based grounded semantics. Our experimental results
indicate that the metasp approach works well in those cases where the
complexity of the encoded problem is adequately mirrored within the metasp
approach.
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2011 20:19:09 GMT"
}
] | 1,461,110,400,000 | [
[
"Dvořák",
"Wolfgang",
""
],
[
"Gaggl",
"Sarah Alice",
""
],
[
"Wallner",
"Johannes",
""
],
[
"Woltran",
"Stefan",
""
]
] |
1108.5002 | Yoshitaka Kameya | Yoshitaka Kameya, Satoru Nakamura, Tatsuya Iwasaki and Taisuke Sato | Verbal Characterization of Probabilistic Clusters using Minimal
Discriminative Propositions | 13 pages including 3 figures. This is the full version of a paper at
ICTAI-2011 (http://www.cse.fau.edu/ictai2011/) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a knowledge discovery process, interpretation and evaluation of the mined
results are indispensable in practice. In the case of data clustering, however,
it is often difficult to see in what aspect each cluster has been formed. This
paper proposes a method for automatic and objective characterization or
"verbalization" of the clusters obtained by mixture models, in which we collect
conjunctions of propositions (attribute-value pairs) that help us interpret or
evaluate the clusters. The proposed method provides us with a new, in-depth and
consistent tool for cluster interpretation/evaluation, and works for various
types of datasets including continuous attributes and missing values.
Experimental results with a couple of standard datasets exhibit the utility of
the proposed method, and the importance of feedback from the
interpretation/evaluation step.
| [
{
"version": "v1",
"created": "Thu, 25 Aug 2011 03:41:26 GMT"
},
{
"version": "v2",
"created": "Wed, 31 Aug 2011 02:48:36 GMT"
}
] | 1,314,835,200,000 | [
[
"Kameya",
"Yoshitaka",
""
],
[
"Nakamura",
"Satoru",
""
],
[
"Iwasaki",
"Tatsuya",
""
],
[
"Sato",
"Taisuke",
""
]
] |
1108.5250 | Tshilidzi Marwala | A.K. Mohamed, T. Marwala, and L.R. John | Single-trial EEG Discrimination between Wrist and Finger Movement
Imagery and Execution in a Sensorimotor BCI | 33rd Annual International IEEE EMBS Conference 2011 | null | 10.1109/IEMBS.2011.6091552 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A brain-computer interface (BCI) may be used to control a prosthetic or
orthotic hand using neural activity from the brain. The core of this
sensorimotor BCI lies in the interpretation of the neural information extracted
from electroencephalogram (EEG). It is desired to improve on the interpretation
of EEG to allow people with neuromuscular disorders to perform daily
activities. This paper investigates the possibility of discriminating between
the EEG associated with wrist and finger movements. The EEG was recorded from
test subjects as they executed and imagined five essential hand movements using
both hands. Independent component analysis (ICA) and time-frequency techniques
were used to extract spectral features based on event-related
(de)synchronisation (ERD/ERS), while the Bhattacharyya distance (BD) was used
for feature reduction. Mahalanobis distance (MD) clustering and artificial
neural networks (ANN) were used as classifiers and obtained average accuracies
of 65% and 71%, respectively. This shows that EEG discrimination between wrist
and finger movements is possible. The research introduces a new combination of
motor tasks to BCI research.
| [
{
"version": "v1",
"created": "Fri, 26 Aug 2011 07:10:04 GMT"
}
] | 1,479,340,800,000 | [
[
"Mohamed",
"A. K.",
""
],
[
"Marwala",
"T.",
""
],
[
"John",
"L. R.",
""
]
] |
1108.5586 | Petra Hofstedt | Denny Schneeweiss and Petra Hofstedt | FdConfig: A Constraint-Based Interactive Product Configurator | 19th International Conference on Applications of Declarative
Programming and Knowledge Management (INAP 2011) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a constraint-based approach to interactive product configuration.
Our configurator tool FdConfig is based on feature models for the
representation of the product domain. Such models can be directly mapped into
constraint satisfaction problems and dealt with by appropriate constraint
solvers. During the interactive configuration process the user generates new
constraints as a result of his configuration decisions and even may retract
constraints posted earlier. We discuss the configuration process, explain the
underlying techniques and show optimizations.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2011 14:55:47 GMT"
}
] | 1,426,723,200,000 | [
[
"Schneeweiss",
"Denny",
""
],
[
"Hofstedt",
"Petra",
""
]
] |
1108.5626 | Thomas Krennwallner | Thomas Eiter, Thomas Krennwallner, Christoph Redl | Nested HEX-Programs | Proceedings of the 19th International Conference on Applications of
Declarative Programming and Knowledge Management (INAP 2011) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer-Set Programming (ASP) is an established declarative programming
paradigm. However, classical ASP lacks subprogram calls as in procedural
programming, and access to external computations (like remote procedure calls)
in general. The feature is desired for increasing modularity and---assuming
proper access in place---(meta-)reasoning over subprogram results. While
HEX-programs extend classical ASP with external source access, they do not
support calls of (sub-)programs upfront. We present nested HEX-programs, which
extend HEX-programs to provide this feature in a user-friendly manner.
Notably, the answer sets of called sub-programs can be individually accessed.
This is particularly useful for applications that need to reason over answer
sets like belief set merging, user-defined aggregate functions, or preferences
of answer sets.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2011 16:16:14 GMT"
}
] | 1,314,835,200,000 | [
[
"Eiter",
"Thomas",
""
],
[
"Krennwallner",
"Thomas",
""
],
[
"Redl",
"Christoph",
""
]
] |
1108.5717 | Lilyana Mihalkova | Lilyana Mihalkova and Walaa Eldin Moustafa | Structure Selection from Streaming Relational Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical relational learning techniques have been successfully applied in
a wide range of relational domains. In most of these applications, the human
designers capitalized on their background knowledge by following a
trial-and-error trajectory, where relational features are manually defined by a
human engineer, parameters are learned for those features on the training data,
the resulting model is validated, and the cycle repeats as the engineer adjusts
the set of features. This paper seeks to streamline application development in
large relational domains by introducing a light-weight approach that
efficiently evaluates relational features on pieces of the relational graph
that are streamed to it one at a time. We evaluate our approach on two social
media tasks and demonstrate that it leads to more accurate models that are
learned faster.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2011 19:19:17 GMT"
}
] | 1,314,662,400,000 | [
[
"Mihalkova",
"Lilyana",
""
],
[
"Moustafa",
"Walaa Eldin",
""
]
] |
1108.5794 | Christoph Beierle | Christoph Beierle, Gabriele Kern-Isberner, Karl S\"odler | A Constraint Logic Programming Approach for Computing Ordinal
Conditional Functions | To appear in the Proceedings of the 25th Workshop on Logic
Programming (WLP 2011) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to give appropriate semantics to qualitative conditionals of the
form "if A then normally B", ordinal conditional functions (OCFs) ranking the
possible worlds according to their degree of plausibility can be used. An OCF
accepting all conditionals of a knowledge base R can be characterized as the
solution of a constraint satisfaction problem. We present a high-level,
declarative approach using constraint logic programming techniques for solving
this constraint satisfaction problem. In particular, the approach developed
here supports the generation of all minimal solutions; these minimal solutions
are of special interest as they provide a basis for model-based inference from
R.
| [
{
"version": "v1",
"created": "Tue, 30 Aug 2011 01:41:34 GMT"
}
] | 1,314,748,800,000 | [
[
"Beierle",
"Christoph",
""
],
[
"Kern-Isberner",
"Gabriele",
""
],
[
"Södler",
"Karl",
""
]
] |
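For the record above (1108.5794), the acceptance condition behind the constraint formulation is that, for each conditional (B|A), the most plausible A∧B world must be strictly more plausible (lower rank) than the most plausible A∧¬B world. A brute-force sketch over a two-atom example follows; it uses exhaustive enumeration and total-rank-sum minimality as a crude stand-in, not the constraint logic programming machinery or the exact minimality notion of the paper.

```python
from itertools import product

# Worlds over two atoms (bird, flies), represented as dicts.
ATOMS = ("b", "f")
WORLDS = [dict(zip(ATOMS, bits)) for bits in product([True, False], repeat=2)]

# A conditional (B | A), "if A then normally B", as a pair of predicates.
CONDITIONALS = [
    (lambda w: w["b"], lambda w: w["f"]),      # birds normally fly
]

def accepts(kappa, conditionals):
    """kappa accepts (B|A) iff the best A-and-B world is strictly more
    plausible (lower rank) than the best A-and-not-B world."""
    for ante, cons in conditionals:
        ver = [kappa[i] for i, w in enumerate(WORLDS) if ante(w) and cons(w)]
        fal = [kappa[i] for i, w in enumerate(WORLDS) if ante(w) and not cons(w)]
        if not ver or (fal and min(ver) >= min(fal)):
            return False
    return True

def minimal_ocfs(conditionals, max_rank=2):
    """Brute-force enumeration of admissible OCFs with ranks in 0..max_rank,
    keeping those with minimal total rank (a crude notion of minimality)."""
    sols = [k for k in product(range(max_rank + 1), repeat=len(WORLDS))
            if min(k) == 0 and accepts(k, conditionals)]
    best = min(sum(k) for k in sols)
    return [k for k in sols if sum(k) == best]

if __name__ == "__main__":
    for kappa in minimal_ocfs(CONDITIONALS):
        print({tuple(w.items()): r for w, r in zip(WORLDS, kappa)})
```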
1108.5825 | Lena Wiese | Katsumi Inoue and Chiaki Sakama and Lena Wiese | Confidentiality-Preserving Data Publishing for Credulous Users by
Extended Abduction | Paper appears in the Proceedings of the 19th International Conference
on Applications of Declarative Programming and Knowledge Management (INAP
2011) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Publishing private data on external servers incurs the problem of how to
avoid unwanted disclosure of confidential data. We study a problem of
confidentiality in extended disjunctive logic programs and show how it can be
solved by extended abduction. In particular, we analyze how credulous
non-monotonic reasoning affects confidentiality.
| [
{
"version": "v1",
"created": "Tue, 30 Aug 2011 04:19:40 GMT"
}
] | 1,314,748,800,000 | [
[
"Inoue",
"Katsumi",
""
],
[
"Sakama",
"Chiaki",
""
],
[
"Wiese",
"Lena",
""
]
] |
1108.6007 | Markus Triska | Markus Triska | Domain-specific Languages in a Finite Domain Constraint Programming
System | Proceedings of the 19th International Conference on Applications of
Declarative Programming and Knowledge Management (INAP 2011) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present domain-specific languages (DSLs) that we devised
for their use in the implementation of a finite domain constraint programming
system, available as library(clpfd) in SWI-Prolog and YAP-Prolog. These DSLs
are used in propagator selection and constraint reification. In these areas,
they lead to concise specifications that are easy to read and reason about. At
compilation time, these specifications are translated to Prolog code, reducing
interpretative run-time overheads. The devised languages can be used in the
implementation of other finite domain constraint solvers as well and may
contribute to their correctness, conciseness and efficiency.
| [
{
"version": "v1",
"created": "Tue, 30 Aug 2011 16:43:17 GMT"
}
] | 1,314,748,800,000 | [
[
"Triska",
"Markus",
""
]
] |
1108.6208 | Norbert Manthey | Norbert Manthey | Coprocessor - a Standalone SAT Preprocessor | system description, short paper, WLP 2011 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work a stand-alone preprocessor for SAT is presented that is able to
perform most of the known preprocessing techniques. Preprocessing a formula in
SAT is important for performance since redundancy can be removed. The
preprocessor is part of the SAT solver riss and is called Coprocessor. Not only
riss, but also MiniSat 2.2 benefits from it, because the SatELite preprocessor
of MiniSat does not implement recent techniques. By using more advanced
techniques, Coprocessor is able to reduce the redundancy in a formula further
and improves the overall solving performance.
| [
{
"version": "v1",
"created": "Wed, 31 Aug 2011 12:38:21 GMT"
}
] | 1,314,835,200,000 | [
[
"Manthey",
"Norbert",
""
]
] |
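The system described above (1108.6208) bundles many SAT preprocessing techniques. As a hedged illustration of what such techniques do, the sketch below implements two of the simplest ones, unit propagation and clause subsumption, on clauses represented as frozensets of integer literals; it is not Coprocessor's code and covers only a small fraction of the techniques mentioned.

```python
def unit_propagate(clauses):
    """Repeatedly assign unit clauses and simplify; returns (clauses, assignment)
    or (None, None) on conflict. Clauses are frozensets of non-zero ints."""
    clauses = set(clauses)
    assignment = {}
    while True:
        units = [next(iter(c)) for c in clauses if len(c) == 1]
        if not units:
            return clauses, assignment
        lit = units[0]
        if -lit in units:
            return None, None                      # complementary units: conflict
        assignment[abs(lit)] = lit > 0
        new = set()
        for c in clauses:
            if lit in c:
                continue                           # clause satisfied, drop it
            if -lit in c:
                c = frozenset(c - {-lit})          # remove the falsified literal
                if not c:
                    return None, None              # empty clause: conflict
            new.add(c)
        clauses = new

def subsume(clauses):
    """Remove every clause that is a superset of another clause."""
    kept = []
    for c in sorted(clauses, key=len):
        if not any(k <= c for k in kept):
            kept.append(c)
    return set(kept)

if __name__ == "__main__":
    cnf = {frozenset(s) for s in [{1}, {-1, 2}, {-2, 3, 4}, {3, 4, 5}, {-1, 2, 5}]}
    cnf, assign = unit_propagate(cnf)
    cnf = subsume(cnf)
    print(assign, cnf)   # {1: True, 2: True} and the single clause {3, 4}
```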
1109.1231 | Luis Quesada | Hadrien Cambazard, Deepak Mehta, Barry O'Sullivan, Luis Quesada, Marco
Ruffini, David Payne, Linda Doyle | A Combinatorial Optimisation Approach to Designing Dual-Parented
Long-Reach Passive Optical Networks | University of Ulster, Intelligent System Research Centre, technical
report series. ISSN 2041-6407 | Proceedings of the 22nd Irish Conference on Artificial
Intelligence and Cognitive Science (AICS 2011), pp. 26-35, Derry, UK | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an application focused on the design of resilient long-reach
passive optical networks. We specifically consider dual-parented networks
whereby each customer must be connected to two metro sites via local exchange
sites. An important property of such a placement is resilience to single metro
node failure. The objective of the application is to determine the optimal
position of a set of metro nodes such that the total optical fibre length is
minimized. We prove that this problem is NP-complete. We present two
alternative combinatorial optimisation approaches to finding an optimal metro
node placement using: a mixed integer linear programming (MIP) formulation of
the problem; and, a hybrid approach that uses clustering as a preprocessing
step. We consider a detailed case-study based on a network for Ireland. The
hybrid approach scales well and finds solutions that are close to optimal, with
a runtime that is two orders-of-magnitude better than the MIP model.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2011 17:06:23 GMT"
}
] | 1,315,353,600,000 | [
[
"Cambazard",
"Hadrien",
""
],
[
"Mehta",
"Deepak",
""
],
[
"O'Sullivan",
"Barry",
""
],
[
"Quesada",
"Luis",
""
],
[
"Ruffini",
"Marco",
""
],
[
"Payne",
"David",
""
],
[
"Doyle",
"Linda",
""
]
] |
1109.1314 | Tom Schaul | Tom Schaul, Julian Togelius, J\"urgen Schmidhuber | Measuring Intelligence through Games | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial general intelligence (AGI) refers to research aimed at tackling
the full problem of artificial intelligence, that is, creating truly intelligent
agents. This sets it apart from most AI research which aims at solving
relatively narrow domains, such as character recognition, motion planning, or
increasing player satisfaction in games. But how do we know when an agent is
truly intelligent? A common point of reference in the AGI community is Legg and
Hutter's formal definition of universal intelligence, which has the appeal of
simplicity and generality but is unfortunately incomputable. Games of various
kinds are commonly used as benchmarks for "narrow" AI research, as they are
considered to have many important properties. We argue that many of these
properties carry over to the testing of general intelligence as well. We then
sketch how such testing could practically be carried out. The central part of
this sketch is an extension of universal intelligence to deal with finite time,
and the use of sampling of the space of games expressed in a suitably biased
game description language.
| [
{
"version": "v1",
"created": "Tue, 6 Sep 2011 22:13:30 GMT"
}
] | 1,315,440,000,000 | [
[
"Schaul",
"Tom",
""
],
[
"Togelius",
"Julian",
""
],
[
"Schmidhuber",
"Jürgen",
""
]
] |
1109.1498 | E. Di Sciascio | E. Di Sciascio, F. M. Donini, M. Mongiello | Structured Knowledge Representation for Image Retrieval | null | Journal Of Artificial Intelligence Research, Volume 16, pages
209-257, 2002 | 10.1613/jair.902 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a structured approach to the problem of retrieval of images by
content and present a description logic that has been devised for the semantic
indexing and retrieval of images containing complex objects. As other
approaches do, we start from low-level features extracted with image analysis
to detect and characterize regions in an image. However, in contrast with
feature-based approaches, we provide a syntax to describe segmented regions as
basic objects and complex objects as compositions of basic ones. Then we
introduce a companion extensional semantics for defining reasoning services,
such as retrieval, classification, and subsumption. These services can be used
for both exact and approximate matching, using similarity measures. Using our
logical approach as a formal specification, we implemented a complete
client-server image retrieval system, which allows a user to pose both queries
by sketch and queries by example. A set of experiments has been carried out on
a testbed of images to assess the retrieval capabilities of the system in
comparison with expert users' rankings. Results are presented adopting a
well-established measure of quality borrowed from textual information
retrieval.
| [
{
"version": "v1",
"created": "Thu, 30 Jun 2011 17:45:48 GMT"
}
] | 1,315,440,000,000 | [
[
"Di Sciascio",
"E.",
""
],
[
"Donini",
"F. M.",
""
],
[
"Mongiello",
"M.",
""
]
] |
1109.1922 | Markus Wagner | Katya Vladislavleva, Tobias Friedrich, Frank Neumann, Markus Wagner | Predicting the Energy Output of Wind Farms Based on Weather Data:
Important Variables and their Correlation | 13 pages, 11 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wind energy plays an increasing role in the supply of energy world-wide. The
energy output of a wind farm is highly dependent on the weather condition
present at the wind farm. If the output can be predicted more accurately,
energy suppliers can coordinate the collaborative production of different
energy sources more efficiently to avoid costly overproduction.
With this paper, we take a computer science perspective on energy prediction
based on weather data and analyze the important parameters as well as their
correlation with the energy output. To deal with the interaction of the different
parameters we use symbolic regression based on the genetic programming tool
DataModeler.
Our studies are carried out on publicly available weather and energy data for
a wind farm in Australia. We reveal the correlation of the different variables
with the energy output. The model obtained for energy prediction gives a very
reliable prediction of the energy output for newly given weather data.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 07:38:59 GMT"
}
] | 1,315,785,600,000 | [
[
"Vladislavleva",
"Katya",
""
],
[
"Friedrich",
"Tobias",
""
],
[
"Neumann",
"Frank",
""
],
[
"Wagner",
"Markus",
""
]
] |
1109.1966 | Timothy Hunter | Timothy Hunter, Pieter Abbeel, and Alexandre Bayen | The path inference filter: model-based low-latency map matching of probe
vehicle data | Preprint, 23 pages and 23 figures | null | 10.1016/j.trb.2013.03.008 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of reconstructing vehicle trajectories from sparse
sequences of GPS points, for which the sampling interval is between 10 seconds
and 2 minutes. We introduce a new class of algorithms, collectively called the
path inference filter (PIF), that maps GPS data in real time, for a variety of
trade-offs and scenarios, and with a high throughput. Numerous prior approaches
in map-matching can be shown to be special cases of the path inference filter
presented in this article. We present an efficient procedure for automatically
training the filter on new data, with or without ground truth observations. The
framework is evaluated on a large San Francisco taxi dataset and is shown to
improve upon the current state of the art. This filter also provides insights
about driving patterns of drivers. The path inference filter has been deployed
at an industrial scale inside the Mobile Millennium traffic information system,
and is used to map fleets of data in San Francisco, Sacramento, Stockholm and
Porto.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 11:12:35 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jun 2012 17:12:40 GMT"
}
] | 1,426,723,200,000 | [
[
"Hunter",
"Timothy",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Bayen",
"Alexandre",
""
]
] |
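The abstract above (1109.1966) concerns mapping sparse GPS points onto road paths. The following sketch shows only the generic dynamic-programming (Viterbi) core that probabilistic map-matching methods of this kind share, with made-up emission and transition scores; it is not the path inference filter's actual model, training procedure, or deployment code.

```python
def viterbi_map_match(candidates, emission, transition):
    """Generic Viterbi decoding over per-observation candidate road segments.
    candidates[t] is a list of candidate states for observation t;
    emission(t, s) and transition(s, s2) return log-scores."""
    score = {s: emission(0, s) for s in candidates[0]}
    back = [{}]
    for t in range(1, len(candidates)):
        new, bp = {}, {}
        for s in candidates[t]:
            prev, val = max(((p, score[p] + transition(p, s)) for p in score),
                            key=lambda x: x[1])
            new[s] = val + emission(t, s)
            bp[s] = prev
        score, back = new, back + [bp]
    # Backtrack the best state sequence.
    last = max(score, key=score.get)
    path = [last]
    for bp in reversed(back[1:]):
        path.append(bp[path[-1]])
    return list(reversed(path))

if __name__ == "__main__":
    # Two road segments "A" and "B"; three GPS fixes, the middle one ambiguous.
    candidates = [["A"], ["A", "B"], ["B"]]
    dist = {(0, "A"): 5, (1, "A"): 20, (1, "B"): 25, (2, "B"): 5}   # metres to segment
    emission = lambda t, s: -dist[(t, s)] ** 2 / 100.0
    transition = lambda s, s2: 0.0 if s == s2 else -1.0            # mild switch penalty
    print(viterbi_map_match(candidates, emission, transition))     # ['A', 'A', 'B']
```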
1109.2048 | G. Barish | G. Barish, C. A. Knoblock | An Expressive Language and Efficient Execution System for Software
Agents | null | Journal Of Artificial Intelligence Research, Volume 23, pages
625-666, 2005 | 10.1613/jair.1548 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Software agents can be used to automate many of the tedious, time-consuming
information processing tasks that humans currently have to complete manually.
However, to do so, agent plans must be capable of representing the myriad of
actions and control flows required to perform those tasks. In addition, since
these tasks can require integrating multiple sources of remote information
(typically a slow, I/O-bound process), it is desirable to make execution as
efficient as possible. To address both of these needs, we present a flexible
software agent plan language and a highly parallel execution system that enable
the efficient execution of expressive agent plans. The plan language allows
complex tasks to be more easily expressed by providing a variety of operators
for flexibly processing the data as well as supporting subplans (for
modularity) and recursion (for indeterminate looping). The executor is based on
a streaming dataflow model of execution to maximize the amount of operator and
data parallelism possible at runtime. We have implemented both the language and
executor in a system called THESEUS. Our results from testing THESEUS show that
streaming dataflow execution can yield significant speedups over both
traditional serial (von Neumann) as well as non-streaming dataflow-style
execution that existing software and robot agent execution systems currently
support. In addition, we show how plans written in the language we present can
represent certain types of subtasks that cannot be accomplished using the
languages supported by network query engines. Finally, we demonstrate that the
increased expressivity of our plan language does not hamper performance;
specifically, we show how data can be integrated from multiple remote sources
just as efficiently using our architecture as is possible with a
state-of-the-art streaming-dataflow network query engine.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 15:57:43 GMT"
}
] | 1,315,785,600,000 | [
[
"Barish",
"G.",
""
],
[
"Knoblock",
"C. A.",
""
]
] |
1109.2049 | Matti J\"arvisalo | Anton Belov and Matti J\"arvisalo | Structure-Based Local Search Heuristics for Circuit-Level Boolean
Satisfiability | 15 pages | Presented at 8th International Workshop on Local Search Techniques
in Constraint Satisfaction (LSCS 2011) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work focuses on improving state-of-the-art in stochastic local search
(SLS) for solving Boolean satisfiability (SAT) instances arising from
real-world industrial SAT application domains. The recently introduced SLS
method CRSat has been shown to noticeably improve on previously suggested SLS
techniques in solving such real-world instances by combining
justification-based local search with limited Boolean constraint propagation on
the non-clausal representation of formulas as Boolean circuits. In this work,
we study possibilities of further improving the performance of CRSat by
exploiting circuit-level structural knowledge for developing new search
heuristics for CRSat. To this end, we introduce and experimentally evaluate a
variety of search heuristics, many of which are motivated by circuit-level
heuristics originally developed in completely different contexts, e.g., for
electronic design automation applications. To the best of our knowledge, most
of the heuristics are novel in the context of SLS for SAT and, more generally,
SLS for constraint satisfaction problems.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 15:58:36 GMT"
}
] | 1,315,785,600,000 | [
[
"Belov",
"Anton",
""
],
[
"Järvisalo",
"Matti",
""
]
] |
1109.2127 | V. Bayer-Zubek | V. Bayer-Zubek, T. G. Dietterich | Integrating Learning from Examples into the Search for Diagnostic
Policies | null | Journal Of Artificial Intelligence Research, Volume 24, pages
263-303, 2005 | 10.1613/jair.1512 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper studies the problem of learning diagnostic policies from training
examples. A diagnostic policy is a complete description of the decision-making
actions of a diagnostician (i.e., tests followed by a diagnostic decision) for
all possible combinations of test results. An optimal diagnostic policy is one
that minimizes the expected total cost, which is the sum of measurement costs
and misdiagnosis costs. In most diagnostic settings, there is a tradeoff
between these two kinds of costs. This paper formalizes diagnostic decision
making as a Markov Decision Process (MDP). The paper introduces a new family of
systematic search algorithms based on the AO* algorithm to solve this MDP. To
make AO* efficient, the paper describes an admissible heuristic that enables
AO* to prune large parts of the search space. The paper also introduces several
greedy algorithms including some improvements over previously-published
methods. The paper then addresses the question of learning diagnostic policies
from examples. When the probabilities of diseases and test results are computed
from training data, there is a great danger of overfitting. To reduce
overfitting, regularizers are integrated into the search algorithms. Finally,
the paper compares the proposed methods on five benchmark diagnostic data sets.
The studies show that in most cases the systematic search methods produce
better diagnostic policies than the greedy methods. In addition, the studies
show that for training sets of realistic size, the systematic search algorithms
are practical on today's desktop computers.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:20:12 GMT"
}
] | 1,315,872,000,000 | [
[
"Bayer-Zubek",
"V.",
""
],
[
"Dietterich",
"T. G.",
""
]
] |
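For the record above (1109.2127), the objective being optimised is the expected total cost of a diagnostic policy, i.e. expected measurement cost plus expected misdiagnosis cost. The toy example below evaluates that objective for a fixed two-test policy under a made-up joint distribution; the diseases, tests, costs and probabilities are all illustrative, and the paper's AO*-based search is not reproduced here.

```python
# Hypothetical two-test, two-disease example: a diagnostic policy is a small
# decision tree whose leaves are diagnoses and whose internal nodes are tests.

TEST_COST = {"t1": 1.0, "t2": 2.0}
MISDIAGNOSIS_COST = 50.0

# Joint distribution P(disease, t1, t2) for a toy problem (sums to 1).
JOINT = {
    ("flu",  1, 1): 0.20, ("flu",  1, 0): 0.10, ("flu",  0, 1): 0.05, ("flu",  0, 0): 0.05,
    ("cold", 1, 1): 0.05, ("cold", 1, 0): 0.10, ("cold", 0, 1): 0.15, ("cold", 0, 0): 0.30,
}

def policy(outcomes):
    """Perform t1; if positive diagnose flu, otherwise perform t2 and
    diagnose flu on a positive result, cold otherwise."""
    tests = ["t1"]
    if outcomes["t1"] == 1:
        return tests, "flu"
    tests.append("t2")
    return tests, "flu" if outcomes["t2"] == 1 else "cold"

def expected_total_cost():
    """Expected measurement cost plus expected misdiagnosis cost."""
    total = 0.0
    for (disease, o1, o2), p in JOINT.items():
        tests, diagnosis = policy({"t1": o1, "t2": o2})
        cost = sum(TEST_COST[t] for t in tests)
        if diagnosis != disease:
            cost += MISDIAGNOSIS_COST
        total += p * cost
    return total

if __name__ == "__main__":
    print(expected_total_cost())   # 19.6 for this toy policy and distribution
```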
1109.2131 | J. Larrosa | J. Larrosa, E. Morancho, D. Niso | On the Practical use of Variable Elimination in Constraint Optimization
Problems: 'Still-life' as a Case Study | null | Journal Of Artificial Intelligence Research, Volume 23, pages
421-440, 2005 | 10.1613/jair.1541 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variable elimination is a general technique for constraint processing. It is
often discarded because of its high space complexity. However, it can be
extremely useful when combined with other techniques. In this paper we study
the applicability of variable elimination to the challenging problem of finding
still-lifes. We illustrate several alternatives: variable elimination as a
stand-alone algorithm, interleaved with search, and as a source of good quality
lower bounds. We show that these techniques are the best known option both
theoretically and empirically. In our experiments we have been able to solve
the n=20 instance, which is far beyond reach with alternative approaches.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:23:06 GMT"
}
] | 1,315,872,000,000 | [
[
"Larrosa",
"J.",
""
],
[
"Morancho",
"E.",
""
],
[
"Niso",
"D.",
""
]
] |
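To accompany the abstract above (1109.2131), here is a minimal max-sum variable elimination routine on a tiny weighted constraint problem. It shows the basic bucket-style elimination step (combine all functions mentioning a variable, maximise the variable out) but nothing of the still-life-specific modelling, the interleaving with search, or the lower-bounding uses described in the paper; all names and the toy payoffs are illustrative.

```python
from itertools import product

def eliminate(variable, functions, domains):
    """Replace all functions mentioning `variable` by a single new function
    obtained by maximising the variable out (max-sum variable elimination).
    Each function is a pair (scope, table) with table keyed by value tuples."""
    touching = [f for f in functions if variable in f[0]]
    rest = [f for f in functions if variable not in f[0]]
    scope = sorted({v for sc, _ in touching for v in sc if v != variable})
    table = {}
    for assignment in product(*(domains[v] for v in scope)):
        env = dict(zip(scope, assignment))
        table[assignment] = max(
            sum(tab[tuple(dict(env, **{variable: x})[v] for v in sc)]
                for sc, tab in touching)
            for x in domains[variable])
    return rest + [(scope, table)]

if __name__ == "__main__":
    # Tiny chain x1 - x2 - x3, Boolean domains, pairwise payoffs to maximise.
    domains = {v: (0, 1) for v in ("x1", "x2", "x3")}
    f12 = (["x1", "x2"], {(a, b): (2 if a != b else 0) for a in (0, 1) for b in (0, 1)})
    f23 = (["x2", "x3"], {(a, b): (1 if a == b else 0) for a in (0, 1) for b in (0, 1)})
    functions = [f12, f23]
    for v in ("x1", "x3", "x2"):          # an elimination ordering
        functions = eliminate(v, functions, domains)
    # One function with empty scope remains; its single entry is the optimum.
    print(functions)   # [([], {(): 3})]
```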
1109.2134 | H. E. Dixon | H. E. Dixon, M. L. Ginsberg, E. M. Luks, A. J. Parkes | Generalizing Boolean Satisfiability II: Theory | null | Journal Of Artificial Intelligence Research, Volume 22, pages
481-534, 2004 | 10.1613/jair.1555 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the second of three planned papers describing ZAP, a satisfiability
engine that substantially generalizes existing tools while retaining the
performance characteristics of modern high performance solvers. The fundamental
idea underlying ZAP is that many problems passed to such engines contain rich
internal structure that is obscured by the Boolean representation used; our
goal is to define a representation in which this structure is apparent and can
easily be exploited to improve computational performance. This paper presents
the theoretical basis for the ideas underlying ZAP, arguing that existing ideas
in this area exploit a single, recurring structure in that multiple database
axioms can be obtained by operating on a single axiom using a subgroup of the
group of permutations on the literals in the problem. We argue that the group
structure precisely captures the general structure at which earlier approaches
hinted, and give numerous examples of its use. We go on to extend the
Davis-Putnam-Logemann-Loveland inference procedure to this broader setting, and
show that earlier computational improvements are either subsumed or left intact
by the new method. The third paper in this series discusses ZAPs implementation
and presents experimental performance results.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:23:53 GMT"
}
] | 1,315,872,000,000 | [
[
"Dixon",
"H. E.",
""
],
[
"Ginsberg",
"M. L.",
""
],
[
"Luks",
"E. M.",
""
],
[
"Parkes",
"A. J.",
""
]
] |
1109.2137 | P. Domingos | P. Domingos, S. Sanghai, D. Weld | Relational Dynamic Bayesian Networks | null | Journal Of Artificial Intelligence Research, Volume 24, pages
759-797, 2005 | 10.1613/jair.1625 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic processes that involve the creation of objects and relations over
time are widespread, but relatively poorly studied. For example, accurate fault
diagnosis in factory assembly processes requires inferring the probabilities of
erroneous assembly operations, but doing this efficiently and accurately is
difficult. Modeled as dynamic Bayesian networks, these processes have discrete
variables with very large domains and extremely high dimensionality. In this
paper, we introduce relational dynamic Bayesian networks (RDBNs), which are an
extension of dynamic Bayesian networks (DBNs) to first-order logic. RDBNs are a
generalization of dynamic probabilistic relational models (DPRMs), which we had
proposed in our previous work to model dynamic uncertain domains. We first
extend the Rao-Blackwellised particle filtering described in our earlier work
to RDBNs. Next, we lift the assumptions associated with Rao-Blackwellization in
RDBNs and propose two new forms of particle filtering. The first one uses
abstraction hierarchies over the predicates to smooth the particle filter's
estimates. The second employs kernel density estimation with a kernel function
specifically designed for relational domains. Experiments show these two
methods greatly outperform standard particle filtering on the task of assembly
plan execution monitoring.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:29:06 GMT"
}
] | 1,315,872,000,000 | [
[
"Domingos",
"P.",
""
],
[
"Sanghai",
"S.",
""
],
[
"Weld",
"D.",
""
]
] |
1109.2138 | N. Y. Foo | N. Y. Foo, Q. B. Vo | Reasoning about Action: An Argumentation - Theoretic Approach | null | Journal Of Artificial Intelligence Research, Volume 24, pages
465-518, 2005 | 10.1613/jair.1602 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a uniform non-monotonic solution to the problems of reasoning
about action on the basis of an argumentation-theoretic approach. Our theory is
provably correct relative to a sensible minimisation policy introduced on top
of a temporal propositional logic. Sophisticated problem domains can be
formalised in our framework. As much attention of researchers in the field has
been paid to the traditional and basic problems in reasoning about actions, such
as the frame, qualification and ramification problems, approaches to
these problems within our formalisation lie at the heart of the expositions
presented in this paper.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:29:24 GMT"
}
] | 1,315,872,000,000 | [
[
"Foo",
"N. Y.",
""
],
[
"Vo",
"Q. B.",
""
]
] |
1109.2139 | P. J. Hawkins | P. J. Hawkins, V. Lagoon, P. J. Stuckey | Solving Set Constraint Satisfaction Problems using ROBDDs | null | Journal Of Artificial Intelligence Research, Volume 24, pages
109-156, 2005 | 10.1613/jair.1638 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a new approach to modeling finite set domain
constraint problems using Reduced Ordered Binary Decision Diagrams (ROBDDs). We
show that it is possible to construct an efficient set domain propagator which
compactly represents many set domains and set constraints using ROBDDs. We
demonstrate that the ROBDD-based approach provides unprecedented flexibility in
modeling constraint satisfaction problems, leading to performance improvements.
We also show that the ROBDD-based modeling approach can be extended to the
modeling of integer and multiset constraint problems in a straightforward
manner. Since domain propagation is not always practical, we also show how to
incorporate less strict consistency notions into the ROBDD framework, such as
set bounds, cardinality bounds and lexicographic bounds consistency. Finally,
we present experimental results that demonstrate the ROBDD-based solver
performs better than various more conventional constraint solvers on several
standard set constraint problems.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:30:13 GMT"
}
] | 1,315,872,000,000 | [
[
"Hawkins",
"P. J.",
""
],
[
"Lagoon",
"V.",
""
],
[
"Stuckey",
"P. J.",
""
]
] |
1109.2140 | P. Cimiano | P. Cimiano, A. Hotho, S. Staab | Learning Concept Hierarchies from Text Corpora using Formal Concept
Analysis | null | Journal Of Artificial Intelligence Research, Volume 24, pages
305-339, 2005 | 10.1613/jair.1648 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach to the automatic acquisition of taxonomies or
concept hierarchies from a text corpus. The approach is based on Formal Concept
Analysis (FCA), a method mainly used for the analysis of data, i.e. for
investigating and processing explicitly given information. We follow Harris'
distributional hypothesis and model the context of a certain term as a vector
representing syntactic dependencies which are automatically acquired from the
text corpus with a linguistic parser. On the basis of this context information,
FCA produces a lattice that we convert into a special kind of partial order
constituting a concept hierarchy. The approach is evaluated by comparing the
resulting concept hierarchies with hand-crafted taxonomies for two domains:
tourism and finance. We also directly compare our approach with hierarchical
agglomerative clustering as well as with Bi-Section-KMeans as an instance of a
divisive clustering algorithm. Furthermore, we investigate the impact of using
different measures weighting the contribution of each attribute as well as of
applying a particular smoothing technique to cope with data sparseness.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:30:44 GMT"
}
] | 1,315,872,000,000 | [
[
"Cimiano",
"P.",
""
],
[
"Hotho",
"A.",
""
],
[
"Staab",
"S.",
""
]
] |
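The abstract above (1109.2140) builds concept hierarchies with Formal Concept Analysis. The brute-force sketch below enumerates the formal concepts of a tiny hand-made object/attribute context via the usual derivation operators; the example context and function names are illustrative, and real FCA toolkits use far more efficient algorithms such as NextClosure.

```python
from itertools import combinations

# A tiny formal context: objects vs. attributes.
OBJECTS = ["duck", "ostrich", "trout"]
ATTRIBUTES = ["flies", "bird", "swims"]
INCIDENCE = {
    "duck":    {"flies", "bird", "swims"},
    "ostrich": {"bird"},
    "trout":   {"swims"},
}

def common_attributes(objs):
    """A' : attributes shared by all objects in objs (all attributes if empty)."""
    sets = [INCIDENCE[o] for o in objs] or [set(ATTRIBUTES)]
    return set.intersection(*sets)

def common_objects(attrs):
    """B' : objects having all attributes in attrs."""
    return {o for o in OBJECTS if attrs <= INCIDENCE[o]}

def formal_concepts():
    """Enumerate all (extent, intent) pairs with extent' = intent and
    intent' = extent, by brute force over attribute subsets."""
    concepts = set()
    for r in range(len(ATTRIBUTES) + 1):
        for attrs in combinations(ATTRIBUTES, r):
            extent = common_objects(set(attrs))
            intent = common_attributes(extent)
            concepts.add((frozenset(extent), frozenset(intent)))
    return concepts

if __name__ == "__main__":
    # Printing by extent size gives the concept lattice levels bottom-up.
    for extent, intent in sorted(formal_concepts(), key=lambda c: len(c[0])):
        print(sorted(extent), sorted(intent))
```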
1109.2142 | H. E. Dixon | H. E. Dixon, M. L. Ginsberg, D. Hofer, E. M. Luks, A. J. Parkes | Generalizing Boolean Satisfiability III: Implementation | null | Journal Of Artificial Intelligence Research, Volume 23, pages
441-531, 2005 | 10.1613/jair.1656 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is the third of three papers describing ZAP, a satisfiability engine
that substantially generalizes existing tools while retaining the performance
characteristics of modern high-performance solvers. The fundamental idea
underlying ZAP is that many problems passed to such engines contain rich
internal structure that is obscured by the Boolean representation used; our
goal has been to define a representation in which this structure is apparent
and can be exploited to improve computational performance. The first paper
surveyed existing work that (knowingly or not) exploited problem structure to
improve the performance of satisfiability engines, and the second paper showed
that this structure could be understood in terms of groups of permutations
acting on individual clauses in any particular Boolean theory. We conclude the
series by discussing the techniques needed to implement our ideas, and by
reporting on their performance on a variety of problem instances.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:31:25 GMT"
}
] | 1,315,872,000,000 | [
[
"Dixon",
"H. E.",
""
],
[
"Ginsberg",
"M. L.",
""
],
[
"Hofer",
"D.",
""
],
[
"Luks",
"E. M.",
""
],
[
"Parkes",
"A. J.",
""
]
] |
1109.2143 | M. Jaeger | M. Jaeger | Ignorability in Statistical and Probabilistic Inference | null | Journal Of Artificial Intelligence Research, Volume 24, pages
889-917, 2005 | 10.1613/jair.1657 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When dealing with incomplete data in statistical learning, or incomplete
observations in probabilistic inference, one needs to distinguish the fact that
a certain event is observed from the fact that the observed event has happened.
Since the modeling and computational complexities entailed by maintaining this
proper distinction are often prohibitive, one asks for conditions under which
it can be safely ignored. Such conditions are given by the missing at random
(mar) and coarsened at random (car) assumptions. In this paper we provide an
in-depth analysis of several questions relating to mar/car assumptions. The main
purpose of our study is to provide criteria by which one may evaluate whether a
car assumption is reasonable for a particular data collecting or observational
process. This question is complicated by the fact that several distinct
versions of mar/car assumptions exist. We therefore first provide an overview
over these different versions, in which we highlight the distinction between
distributional and coarsening variable induced versions. We show that
distributional versions are less restrictive and sufficient for most
applications. We then address from two different perspectives the question of
when the mar/car assumption is warranted. First we provide a static analysis
that characterizes the admissibility of the car assumption in terms of the
support structure of the joint probability distribution of complete data and
incomplete observations. Here we obtain an equivalence characterization that
improves and extends a recent result by Grunwald and Halpern. We then turn to a
procedural analysis that characterizes the admissibility of the car assumption
in terms of procedural models for the actual data (or observation) generating
process. The main result of this analysis is that the stronger coarsened
completely at random (ccar) condition is arguably the most reasonable
assumption, as it alone corresponds to data coarsening procedures that satisfy
a natural robustness property.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:31:47 GMT"
}
] | 1,315,872,000,000 | [
[
"Jaeger",
"M.",
""
]
] |
1109.2145 | M. T.J. Spaan | M. T.J. Spaan, N. Vlassis | Perseus: Randomized Point-based Value Iteration for POMDPs | null | Journal Of Artificial Intelligence Research, Volume 24, pages
195-220, 2005 | 10.1613/jair.1659 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Partially observable Markov decision processes (POMDPs) form an attractive
and principled framework for agent planning under uncertainty. Point-based
approximate techniques for POMDPs compute a policy based on a finite set of
points collected in advance from the agent's belief space. We present a
randomized point-based value iteration algorithm called Perseus. The algorithm
performs approximate value backup stages, ensuring that in each backup stage
the value of each point in the belief set is improved; the key observation is
that a single backup may improve the value of many belief points. Contrary to
other point-based methods, Perseus backs up only a (randomly selected) subset
of points in the belief set, sufficient for improving the value of each belief
point in the set. We show how the same idea can be extended to dealing with
continuous action spaces. Experimental results show the potential of Perseus in
large scale POMDP problems.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:32:03 GMT"
}
] | 1,315,872,000,000 | [
[
"Spaan",
"M. T. J.",
""
],
[
"Vlassis",
"N.",
""
]
] |
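As a rough illustration of the randomized backup stage described in the Perseus record above (1109.2145), here is a sketch on a tiny, made-up two-state POMDP. The model parameters, belief sampling, and stopping criterion are illustrative assumptions rather than the paper's experimental setup; only the core idea follows the abstract: back up randomly chosen beliefs until every belief's value has (weakly) improved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny hypothetical two-state POMDP (tiger-like numbers, invented for this
# sketch): actions are listen / open-left / open-right.
nS, nA, nO, gamma = 2, 3, 2, 0.95
T = np.array([                          # T[a, s, s']
    [[1.0, 0.0], [0.0, 1.0]],           # listening leaves the state unchanged
    [[0.5, 0.5], [0.5, 0.5]],           # opening a door resets the problem
    [[0.5, 0.5], [0.5, 0.5]],
])
Z = np.array([                          # Z[a, s', o]
    [[0.85, 0.15], [0.15, 0.85]],       # listening is informative
    [[0.5, 0.5], [0.5, 0.5]],
    [[0.5, 0.5], [0.5, 0.5]],
])
R = np.array([                          # R[a, s]
    [-1.0, -1.0],
    [-100.0, 10.0],
    [10.0, -100.0],
])

def backup(b, V):
    """Point-based Bellman backup at belief b given the alpha-vector set V."""
    best_val, best_alpha = -np.inf, None
    for a in range(nA):
        g = R[a].copy()
        for o in range(nO):
            # For each observation, pick the future alpha vector best for b.
            G = np.array([T[a] @ (Z[a, :, o] * alpha) for alpha in V])
            g = g + gamma * G[np.argmax(G @ b)]
        if g @ b > best_val:
            best_val, best_alpha = g @ b, g
    return best_alpha

def perseus_stage(B, V):
    """One randomized value-update stage: back up only randomly chosen
    beliefs until the value of every belief in B has improved."""
    V_old = np.array(V)
    old_vals = (B @ V_old.T).max(axis=1)
    V_new, todo = [], np.arange(len(B))
    while todo.size:
        i = rng.choice(todo)
        alpha = backup(B[i], V_old)
        if alpha @ B[i] < old_vals[i]:
            # The backup did not help this belief: keep its best old vector.
            alpha = V_old[np.argmax(V_old @ B[i])]
        V_new.append(alpha)
        new_vals = (B @ np.array(V_new).T).max(axis=1)
        todo = np.where(new_vals < old_vals)[0]
    return V_new

B = rng.dirichlet(np.ones(nS), size=100)             # fixed belief set
V = [np.full(nS, R.min() / (1 - gamma))]             # pessimistic initial vector
for _ in range(30):
    V = perseus_stage(B, V)
print("estimated value of the uniform belief:",
      round(max(float(a @ np.array([0.5, 0.5])) for a in V), 2))
```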
1109.2148 | L. De Raedt | L. De Raedt, K. Kersting, T. Raiko | Logical Hidden Markov Models | null | Journal Of Artificial Intelligence Research, Volume 25, pages
425-456, 2006 | 10.1613/jair.1675 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logical hidden Markov models (LOHMMs) upgrade traditional hidden Markov
models to deal with sequences of structured symbols in the form of logical
atoms, rather than flat characters.
This note formally introduces LOHMMs and presents solutions to the three
central inference problems for LOHMMs: evaluation, most likely hidden state
sequence and parameter estimation. The resulting representation and algorithms
are experimentally evaluated on problems from the domain of bioinformatics.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:33:14 GMT"
}
] | 1,315,872,000,000 | [
[
"De Raedt",
"L.",
""
],
[
"Kersting",
"K.",
""
],
[
"Raiko",
"T.",
""
]
] |
1109.2153 | B. Bonet | B. Bonet, H. Geffner | mGPT: A Probabilistic Planner Based on Heuristic Search | null | Journal Of Artificial Intelligence Research, Volume 24, pages
933-944, 2005 | 10.1613/jair.1688 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe the version of the GPT planner used in the probabilistic track of
the 4th International Planning Competition (IPC-4). This version, called mGPT,
solves Markov Decision Processes specified in the PPDDL language by extracting
and using different classes of lower bounds along with various heuristic-search
algorithms. The lower bounds are extracted from deterministic relaxations where
the alternative probabilistic effects of an action are mapped into different,
independent, deterministic actions. The heuristic-search algorithms use these
lower bounds for focusing the updates and delivering a consistent value
function over all states reachable from the initial state and the greedy
policy.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:42:50 GMT"
}
] | 1,315,872,000,000 | [
[
"Bonet",
"B.",
""
],
[
"Geffner",
"H.",
""
]
] |
1109.2154 | A. Botea | A. Botea, M. Enzenberger, M. Mueller, J. Schaeffer | Macro-FF: Improving AI Planning with Automatically Learned
Macro-Operators | null | Journal Of Artificial Intelligence Research, Volume 24, pages
581-621, 2005 | 10.1613/jair.1696 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent progress in AI planning, many benchmarks remain challenging
for current planners. In many domains, the performance of a planner can greatly
be improved by discovering and exploiting information about the domain
structure that is not explicitly encoded in the initial PDDL formulation. In
this paper we present and compare two automated methods that learn relevant
information from previous experience in a domain and use it to solve new
problem instances. Our methods share a common four-step strategy. First, a
domain is analyzed and structural information is extracted, then
macro-operators are generated based on the previously discovered structure. A
filtering and ranking procedure selects the most useful macro-operators.
Finally, the selected macros are used to speed up future searches. We have
successfully used such an approach in the fourth international planning
competition IPC-4. Our system, Macro-FF, extends Hoffmann's state-of-the-art
planner FF 2.3 with support for two kinds of macro-operators, and with
engineering enhancements. We demonstrate the effectiveness of our ideas on
benchmarks from international planning competitions. Our results indicate a
large reduction in search effort in those complex domains where structural
information can be inferred.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:43:12 GMT"
}
] | 1,315,872,000,000 | [
[
"Botea",
"A.",
""
],
[
"Enzenberger",
"M.",
""
],
[
"Mueller",
"M.",
""
],
[
"Schaeffer",
"J.",
""
]
] |
1109.2155 | S. Kambhampati | S. Kambhampati, M.H.L. van den Briel | Optiplan: Unifying IP-based and Graph-based Planning | null | Journal Of Artificial Intelligence Research, Volume 24, pages
919-931, 2005 | 10.1613/jair.1698 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Optiplan planning system is the first integer programming-based planner
that successfully participated in the international planning competition. This
engineering note describes the architecture of Optiplan and provides the
integer programming formulation that enabled it to perform reasonably well in
the competition. We also touch upon some recent developments that make integer
programming encodings significantly more competitive.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:43:37 GMT"
}
] | 1,315,872,000,000 | [
[
"Kambhampati",
"S.",
""
],
[
"Briel",
"M. H. L. van den",
""
]
] |
1109.2156 | A. Fern | A. Fern, R. Givan, S. Yoon | Approximate Policy Iteration with a Policy Language Bias: Solving
Relational Markov Decision Processes | null | Journal Of Artificial Intelligence Research, Volume 25, pages
75-118, 2006 | 10.1613/jair.1700 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study an approach to policy selection for large relational Markov Decision
Processes (MDPs). We consider a variant of approximate policy iteration (API)
that replaces the usual value-function learning step with a learning step in
policy space. This is advantageous in domains where good policies are easier to
represent and learn than the corresponding value functions, which is often the
case for the relational MDPs we are interested in. In order to apply API to
such problems, we introduce a relational policy language and corresponding
learner. In addition, we introduce a new bootstrapping routine for goal-based
planning domains, based on random walks. Such bootstrapping is necessary for
many large relational MDPs, where reward is extremely sparse, as API is
ineffective in such domains when initialized with an uninformed policy. Our
experiments show that the resulting system is able to find good policies for a
number of classical planning domains and their stochastic variants by solving
them as extremely large relational MDPs. The experiments also point to some
limitations of our approach, suggesting future work.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2011 20:43:53 GMT"
}
] | 1,315,872,000,000 | [
[
"Fern",
"A.",
""
],
[
"Givan",
"R.",
""
],
[
"Yoon",
"S.",
""
]
] |
1109.2346 | A. E. Howe | A. E. Howe, J. P. Watson, L. D. Whitley | Linking Search Space Structure, Run-Time Dynamics, and Problem
Difficulty: A Step Toward Demystifying Tabu Search | null | Journal Of Artificial Intelligence Research, Volume 24, pages
221-261, 2005 | 10.1613/jair.1576 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tabu search is one of the most effective heuristics for locating high-quality
solutions to a diverse array of NP-hard combinatorial optimization problems.
Despite the widespread success of tabu search, researchers have a poor
understanding of many key theoretical aspects of this algorithm, including
models of the high-level run-time dynamics and identification of those search
space features that influence problem difficulty. We consider these questions
in the context of the job-shop scheduling problem (JSP), a domain where tabu
search algorithms have been shown to be remarkably effective. Previously, we
demonstrated that the mean distance between random local optima and the nearest
optimal solution is highly correlated with problem difficulty for a well-known
tabu search algorithm for the JSP introduced by Taillard. In this paper, we
discuss various shortcomings of this measure and develop a new model of problem
difficulty that corrects these deficiencies. We show that Taillard's algorithm
can be modeled with high fidelity as a simple variant of a straightforward
random walk. The random walk model accounts for nearly all of the variability
in the cost required to locate both optimal and sub-optimal solutions to random
JSPs, and provides an explanation for differences in the difficulty of random
versus structured JSPs. Finally, we discuss and empirically substantiate two
novel predictions regarding tabu search algorithm behavior. First, the method
for constructing the initial solution is highly unlikely to impact the
performance of tabu search. Second, tabu tenure should be selected to be as
small as possible while simultaneously avoiding search stagnation; values
larger than necessary lead to significant degradations in performance.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2011 20:09:12 GMT"
}
] | 1,315,872,000,000 | [
[
"Howe",
"A. E.",
""
],
[
"Watson",
"J. P.",
""
],
[
"Whitley",
"L. D.",
""
]
] |
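Since the record above (1109.2346) turns on how tabu tenure shapes search behavior, here is a generic tabu-search skeleton on a toy binary problem that exposes the tenure parameter. This is not Taillard's job-shop algorithm: the objective, the bit-flip neighborhood, and the aspiration rule are simple stand-ins chosen only to make the skeleton runnable.

```python
import random

random.seed(1)

# Toy binary objective (a hypothetical stand-in for a job-shop makespan):
# linear terms plus pairwise interactions that create local optima.
n = 40
weights = [random.uniform(-1.0, 1.0) for _ in range(n)]

def cost(x):
    c = sum(w * b for w, b in zip(weights, x))
    c += sum(0.5 for i in range(n - 1) if x[i] == x[i + 1])
    return c

def tabu_search(tenure, iters=2000):
    x = [random.randint(0, 1) for _ in range(n)]
    best_cost = cost(x)
    tabu = {}                      # bit index -> iteration until which it is tabu
    for it in range(iters):
        candidates = []
        for i in range(n):         # evaluate the full bit-flip neighborhood
            y = x[:]
            y[i] = 1 - y[i]
            c = cost(y)
            # Tabu moves are skipped unless they beat the best cost so far
            # (a standard aspiration criterion).
            if tabu.get(i, -1) > it and c >= best_cost:
                continue
            candidates.append((c, i, y))
        if not candidates:
            continue
        c, i, x = min(candidates)  # best admissible (possibly worsening) move
        tabu[i] = it + tenure
        best_cost = min(best_cost, c)
    return best_cost

for tenure in (1, 8, 32):
    print(f"tenure={tenure:2d}  best cost found: {tabu_search(tenure):7.3f}")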
1109.2347 | F. A. Aloul | F. A. Aloul, I. L. Markov, A. Ramani, K. A. Sakallah | Breaking Instance-Independent Symmetries In Exact Graph Coloring | null | Journal Of Artificial Intelligence Research, Volume 26, pages
289-322, 2006 | 10.1613/jair.1637 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Code optimization and high level synthesis can be posed as constraint
satisfaction and optimization problems, such as graph coloring used in register
allocation. Graph coloring is also used to model more traditional CSPs relevant
to AI, such as planning, time-tabling and scheduling. Provably optimal
solutions may be desirable for commercial and defense applications.
Additionally, for applications such as register allocation and code
optimization, naturally-occurring instances of graph coloring are often small
and can be solved optimally. A recent wave of improvements in algorithms for
Boolean satisfiability (SAT) and 0-1 Integer Linear Programming (ILP) suggests
generic problem-reduction methods, rather than problem-specific heuristics,
because (1) heuristics may be upset by new constraints, (2) heuristics tend to
ignore structure, and (3) many relevant problems are provably inapproximable.
Problem reductions often lead to highly symmetric SAT instances, and
symmetries are known to slow down SAT solvers. In this work, we compare several
avenues for symmetry breaking, in particular when certain kinds of symmetry are
present in all generated instances. Our focus on reducing CSPs to SAT allows us
to leverage recent dramatic improvement in SAT solvers and automatically
benefit from future progress. We can use a variety of black-box SAT solvers
without modifying their source code because our symmetry-breaking techniques
are static, i.e., we detect symmetries and add symmetry breaking predicates
(SBPs) during pre-processing.
An important result of our work is that among the types of
instance-independent SBPs we studied and their combinations, the simplest and
least complete constructions are the most effective. Our experiments also
clearly indicate that instance-independent symmetries should mostly be
processed together with instance-specific symmetries rather than at the
specification level, contrary to what has been suggested in the literature.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2011 20:09:48 GMT"
}
] | 1,315,872,000,000 | [
[
"Aloul",
"F. A.",
""
],
[
"Markov",
"I. L.",
""
],
[
"Ramani",
"A.",
""
],
[
"Sakallah",
"K. A.",
""
]
] |
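To make the reduction discussed above (1109.2347) concrete, the sketch below encodes 3-coloring of a small hypothetical graph as CNF and adds one simple kind of instance-independent symmetry-breaking predicate: restricting the first k vertices to a triangular set of colors, which preserves satisfiability because color classes are interchangeable. This is not necessarily one of the paper's SBP constructions, and the brute-force model counter merely stands in for a real SAT solver to show that the SBPs remove symmetric models.

```python
from itertools import combinations, product

# Hypothetical small instance (not from the paper): is this graph 3-colorable?
# Boolean variable var(v, c) means "vertex v receives color c".
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (2, 4), (3, 4)]
n_vertices, k = 5, 3

def var(v, c):
    return v * k + c + 1                     # DIMACS-style positive ids

clauses = []
for v in range(n_vertices):
    clauses.append([var(v, c) for c in range(k)])            # at least one color
    for c1, c2 in combinations(range(k), 2):                 # at most one color
        clauses.append([-var(v, c1), -var(v, c2)])
for u, v in edges:
    for c in range(k):                                       # adjacent vertices differ
        clauses.append([-var(u, c), -var(v, c)])

# Instance-independent symmetry-breaking predicates: colors are interchangeable,
# so restrict vertex j (for the first k vertices) to colors 0..j.  Any proper
# coloring can be relabelled to respect this, so satisfiability is preserved.
sbp = []
for v in range(min(k, n_vertices)):
    for c in range(v + 1, k):
        sbp.append([-var(v, c)])

def count_models(cnf, n_vars):
    """Brute-force model counter, standing in for a real SAT solver."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        value = lambda lit: bits[abs(lit) - 1] == (lit > 0)
        if all(any(value(lit) for lit in clause) for clause in cnf):
            count += 1
    return count

n_vars = n_vertices * k
print("proper 3-colorings (models) without SBPs:", count_models(clauses, n_vars))
print("proper 3-colorings (models) with    SBPs:", count_models(clauses + sbp, n_vars))
```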
1109.2355 | C. Gretton | C. Gretton, F. Kabanza, D. Price, J. Slaney, S. Thiebaux | Decision-Theoretic Planning with non-Markovian Rewards | null | Journal Of Artificial Intelligence Research, Volume 25, pages
17-74, 2006 | 10.1613/jair.1676 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A decision process in which rewards depend on history rather than merely on
the current state is called a decision process with non-Markovian rewards
(NMRDP). In decision-theoretic planning, where many desirable behaviours are
more naturally expressed as properties of execution sequences rather than as
properties of states, NMRDPs form a more natural model than the commonly
adopted fully Markovian decision process (MDP) model. While the more tractable
solution methods developed for MDPs do not directly apply in the presence of
non-Markovian rewards, a number of solution methods for NMRDPs have been
proposed in the literature. These all exploit a compact specification of the
non-Markovian reward function in temporal logic, to automatically translate the
NMRDP into an equivalent MDP which is solved using efficient MDP solution
methods. This paper presents NMRDPP (Non-Markovian Reward Decision Process
Planner), a software platform for the development and experimentation of
methods for decision-theoretic planning with non-Markovian rewards. The current
version of NMRDPP implements, under a single interface, a family of methods
based on existing as well as new approaches which we describe in detail. These
include dynamic programming, heuristic search, and structured methods. Using
NMRDPP, we compare the methods and identify certain problem features that
affect their performance. NMRDPP's treatment of non-Markovian rewards is
inspired by the treatment of domain-specific search control knowledge in the
TLPlan planner, which it incorporates as a special case. In the First
International Probabilistic Planning Competition, NMRDPP was able to compete
and perform well in both the domain-independent and hand-coded tracks, using
search control knowledge in the latter.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2011 21:39:21 GMT"
}
] | 1,315,872,000,000 | [
[
"Gretton",
"C.",
""
],
[
"Kabanza",
"F.",
""
],
[
"Price",
"D.",
""
],
[
"Slaney",
"J.",
""
],
[
"Thiebaux",
"S.",
""
]
] |
1109.2752 | Ant\'onio Jos\'e dos Reis Morgado | Antonio Morgado and Joao Marques-Silva | On Validating Boolean Optimizers | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boolean optimization finds a wide range of application domains, that
motivated a number of different organizations of Boolean optimizers since the
mid 90s. Some of the most successful approaches are based on iterative calls to
an NP oracle, using either linear search, binary search or the identification
of unsatisfiable sub-formulas. The increasing use of Boolean optimizers in
practical settings raises the question of confidence in computed results. For
example, the issue of confidence is paramount in safety critical settings. One
way of increasing the confidence of the results computed by Boolean optimizers
is to develop techniques for validating the results. Recent work studied the
validation of Boolean optimizers based on branch-and-bound search. This paper
complements existing work, and develops methods for validating Boolean
optimizers that are based on iterative calls to an NP oracle. This entails
implementing solutions for validating both satisfiable and unsatisfiable
answers from the NP oracle. The work described in this paper can be applied to
a wide range of Boolean optimizers, that find application in Pseudo-Boolean
Optimization and in Maximum Satisfiability. Preliminary experimental results
indicate that the impact of the proposed method in overall performance is
negligible.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2011 11:48:32 GMT"
}
] | 1,315,958,400,000 | [
[
"Morgado",
"Antonio",
""
],
[
"Marques-Silva",
"Joao",
""
]
] |
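The iterative-oracle setting described above (1109.2752) can be pictured with a toy partial MaxSAT instance and a linear search on the cost bound. The instance, the brute-force "oracle", and the validation routine are all illustrative assumptions; only the easy direction is shown (validating satisfiable oracle answers by re-checking the model and its cost), while validating the final unsatisfiable answer, the part that needs certificates, is left out.

```python
from itertools import product

# Hypothetical partial MaxSAT instance: literals are +/- 1-based variable ids;
# all hard clauses must hold, and as many soft (unit-weight) clauses as possible.
hard = [[1, 2], [-1, 3], [-2, -3]]
soft = [[1], [2], [3], [-3]]
n_vars = 3

def satisfies(model, clause):
    return any(model[abs(lit) - 1] == (lit > 0) for lit in clause)

def cost(model):
    return sum(1 for c in soft if not satisfies(model, c))

def oracle(max_cost):
    """Brute-force stand-in for the NP oracle: return a model of the hard
    clauses violating at most max_cost soft clauses, or None ('UNSAT')."""
    for bits in product([False, True], repeat=n_vars):
        if all(satisfies(bits, c) for c in hard) and cost(bits) <= max_cost:
            return list(bits)
    return None

def validate_sat_answer(model, claimed_cost):
    """The easy half of validation: re-check a satisfiable oracle answer
    against every hard clause and recompute its cost.  Validating the final
    unsatisfiable answer requires an unsatisfiability certificate and is the
    harder problem studied in the paper; it is not attempted here."""
    return all(satisfies(model, c) for c in hard) and cost(model) == claimed_cost

best, ub = None, len(soft)
while True:                                   # linear search on the cost bound
    model = oracle(ub if best is None else ub - 1)
    if model is None:
        break
    best, ub = model, cost(model)

print("optimal cost:", ub, "model:", best,
      "validated:", validate_sat_answer(best, ub))
```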
1109.3094 | Martin Josef Geiger | Martin Josef Geiger and Marc Sevaux | On the use of reference points for the biobjective Inventory Routing
Problem | null | Proceedings of the 9th Metaheuristics International Conference MIC
2011, July 25-28, 2011, Udine, Italy, Pages 141-149 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The article presents a study on the biobjective inventory routing problem.
Contrary to most previous research, the problem is treated as a true
multi-objective optimization problem, with the goal of identifying
Pareto-optimal solutions. Due to the hardness of the problem at hand, a
reference point based optimization approach is presented and implemented into
an optimization and decision support system, which allows for the computation
of a true subset of the optimal outcomes. Experimental investigations involving
local search metaheuristics are conducted on benchmark data, and numerical
results are reported and analyzed.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2011 14:36:41 GMT"
}
] | 1,316,044,800,000 | [
[
"Geiger",
"Martin Josef",
""
],
[
"Sevaux",
"Marc",
""
]
] |
1109.3313 | Martin Josef Geiger | Martin Josef Geiger, Marc Sevaux, Stefan Voss | Neighborhood Selection in Variable Neighborhood Search | ISBN 978-88-900984-3-7 | Proceedings of the 9th Metaheuristics International Conference MIC
2011, July 25-28, 2011, Udine, Italy, Pages 571-573 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variable neighborhood search (VNS) is a metaheuristic for solving
optimization problems based on a simple principle: systematic changes of
neighborhoods within the search, both in the descent to local minima and in the
escape from the valleys which contain them. Designing these neighborhoods and
applying them in a meaningful fashion is not an easy task. Moreover, an
appropriate order in which they are applied must be determined. In this paper
we attempt to investigate this issue. Assume that we are given an optimization
problem that is intended to be solved by applying the VNS scheme: how many and
which types of neighborhoods should be investigated, and what could be
appropriate selection criteria for applying these neighborhoods? More specifically,
does it pay to "look ahead" (see, e.g., in the context of VNS and GRASP) when
attempting to switch from one neighborhood to another?
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2011 10:53:32 GMT"
}
] | 1,316,131,200,000 | [
[
"Geiger",
"Martin Josef",
""
],
[
"Sevaux",
"Marc",
""
],
[
"Voss",
"Stefan",
""
]
] |
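As a companion to the record above (1109.3313), here is a bare-bones VNS skeleton that makes the systematic change of neighborhoods explicit: shaking neighborhoods of increasing strength, descent to a local optimum, and the reset to the first neighborhood on improvement. The permutation objective and the swap-based neighborhoods are hypothetical stand-ins, and the look-ahead question raised in the abstract is not addressed here.

```python
import random

random.seed(7)

# Toy problem (hypothetical): minimise a path-cost objective over permutations,
# a stand-in for the kind of problem one might attack with VNS.
n = 12
w = [[random.uniform(0.0, 1.0) for _ in range(n)] for _ in range(n)]

def cost(p):
    return sum(w[p[i]][p[i + 1]] for i in range(n - 1))

def local_search(p):
    """Descent to a local optimum using first-improving adjacent swaps."""
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            q = p[:]
            q[i], q[i + 1] = q[i + 1], q[i]
            if cost(q) < cost(p):
                p, improved = q, True
    return p

def shake(p, k):
    """Neighborhood N_k: apply k random transpositions (strength grows with k)."""
    q = p[:]
    for _ in range(k):
        i, j = random.sample(range(n), 2)
        q[i], q[j] = q[j], q[i]
    return q

def vns(kmax=3, iters=200):
    p = local_search(list(range(n)))
    for _ in range(iters):
        k = 1
        while k <= kmax:
            q = local_search(shake(p, k))
            if cost(q) < cost(p):
                p, k = q, 1        # improvement: restart from the first neighborhood
            else:
                k += 1             # otherwise: switch to the next, stronger neighborhood
    return cost(p)

print("best cost found:", round(vns(), 3))
```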
1109.3532 | Misha Denil | Misha Denil and Thomas Trappenberg | A Characterization of the Combined Effects of Overlap and Imbalance on
the SVM Classifier | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we demonstrate that two common problems in Machine
Learning---imbalanced and overlapping data distributions---do not have
independent effects on the performance of SVM classifiers. This result is
notable since it shows that a model of either of these factors must account for
the presence of the other. Our study of the relationship between these problems
has led to the discovery of a previously unreported form of "covert"
overfitting which is resilient to commonly used empirical regularization
techniques. We demonstrate the existence of this covert phenomenon through
several methods based around the parametric regularization of trained SVMs. Our
findings in this area suggest a possible approach to quantifying overlap in
real world data sets.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2011 06:46:39 GMT"
}
] | 1,316,390,400,000 | [
[
"Denil",
"Misha",
""
],
[
"Trappenberg",
"Thomas",
""
]
] |
1109.3700 | Arnaud Martin | Florentin Smarandache (UNM), Arnaud Martin (IRISA), Christophe Osswald
(E3I2) | Contradiction measures and specificity degrees of basic belief
assignments | null | International Conference on Information Fusion, Chicago : United
States (2011) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the theory of belief functions, many measures of uncertainty have been
introduced. However, it is not always easy to understand what these measures
really try to represent. In this paper, we re-interpret some measures of
uncertainty in the theory of belief functions. We present some interests and
drawbacks of the existing measures. On these observations, we introduce a
measure of contradiction. Therefore, we present some degrees of non-specificity
and Bayesianity of a mass. We propose a degree of specificity based on the
distance between a mass and its most specific associated mass. We also show how
to use the degree of specificity to measure the specificity of a fusion rule.
Illustrations on simple examples are given.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2011 19:34:47 GMT"
}
] | 1,316,390,400,000 | [
[
"Smarandache",
"Florentin",
"",
"UNM"
],
[
"Martin",
"Arnaud",
"",
"IRISA"
],
[
"Osswald",
"Christophe",
"",
"E3I2"
]
] |
1109.3737 | Misha Denil | Misha Denil, Loris Bazzani, Hugo Larochelle and Nando de Freitas | Learning where to Attend with Deep Architectures for Image Tracking | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss an attentional model for simultaneous object tracking and
recognition that is driven by gaze data. Motivated by theories of perception,
the model consists of two interacting pathways: identity and control, intended
to mirror the what and where pathways in neuroscience models. The identity
pathway models object appearance and performs classification using deep
(factored)-Restricted Boltzmann Machines. At each point in time the
observations consist of foveated images, with decaying resolution toward the
periphery of the gaze. The control pathway models the location, orientation,
scale and speed of the attended object. The posterior distribution of these
states is estimated with particle filtering. Deeper in the control pathway, we
encounter an attentional mechanism that learns to select gazes so as to
minimize tracking uncertainty. Unlike in our previous work, we introduce gaze
selection strategies which operate in the presence of partial information and
on a continuous action space. We show that a straightforward extension of the
existing approach to the partial information setting results in poor
performance, and we propose an alternative method based on modeling the reward
surface as a Gaussian Process. This approach gives good performance in the
presence of partial information and allows us to expand the action space from a
small, discrete set of fixation points to a continuous domain.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2011 22:32:51 GMT"
}
] | 1,316,476,800,000 | [
[
"Denil",
"Misha",
""
],
[
"Bazzani",
"Loris",
""
],
[
"Larochelle",
"Hugo",
""
],
[
"de Freitas",
"Nando",
""
]
] |
1109.4335 | Xavier Mora | Rosa Camps, Xavier Mora, Laia Saumell | Social choice rules driven by propositional logic | The title has been changed | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several rules for social choice are examined from a unifying point of view
that looks at them as procedures for revising a system of degrees of belief in
accordance with certain specified logical constraints. Belief is here a social
attribute, its degrees being measured by the fraction of people who share a
given opinion. Different known rules and some new ones are obtained depending
on which particular constraints are assumed. These constraints allow us to model
different notions of choiceness. In particular, we give a new method to deal
with approval-disapproval-preferential voting.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2011 10:44:23 GMT"
},
{
"version": "v2",
"created": "Tue, 9 Apr 2013 13:45:53 GMT"
},
{
"version": "v3",
"created": "Tue, 5 May 2015 17:04:10 GMT"
}
] | 1,430,870,400,000 | [
[
"Camps",
"Rosa",
""
],
[
"Mora",
"Xavier",
""
],
[
"Saumell",
"Laia",
""
]
] |
1109.4603 | Andrew Cotter | Andrew Cotter, Joseph Keshet and Nathan Srebro | Explicit Approximations of the Gaussian Kernel | 11 pages, 2 tables, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate training and using Gaussian kernel SVMs by approximating the
kernel with an explicit finite- dimensional polynomial feature representation
based on the Taylor expansion of the exponential. Although not as efficient as
the recently-proposed random Fourier features [Rahimi and Recht, 2007] in terms
of the number of features, we show how this polynomial representation can
provide a better approximation in terms of the computational cost involved.
This makes our "Taylor features" especially attractive for use on very large
data sets, in conjunction with online or stochastic training.
| [
{
"version": "v1",
"created": "Wed, 21 Sep 2011 18:11:05 GMT"
}
] | 1,316,649,600,000 | [
[
"Cotter",
"Andrew",
""
],
[
"Keshet",
"Joseph",
""
],
[
"Srebro",
"Nathan",
""
]
] |
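To see why the truncated Taylor expansion mentioned above (1109.4603) approximates the Gaussian kernel, note that exp(-||x-y||^2/(2 sigma^2)) factors into a term depending on x alone, a term depending on y alone, and exp(<x,y>/sigma^2); truncating the last factor's Taylor series yields a finite-dimensional polynomial feature map. The sketch below only checks this approximation numerically on made-up vectors; it does not build the explicit monomial features or reproduce the paper's experiments.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(0)
sigma = 1.5
x, y = rng.normal(size=5), rng.normal(size=5)     # made-up input vectors

def gaussian_kernel(x, y):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def taylor_kernel(x, y, degree):
    """exp(-||x-y||^2/(2 s^2)) = e^{-||x||^2/(2 s^2)} e^{-||y||^2/(2 s^2)} e^{<x,y>/s^2};
    truncate the last factor's Taylor series at the given degree."""
    s = np.dot(x, y) / sigma ** 2
    series = sum(s ** j / factorial(j) for j in range(degree + 1))
    prefactor = np.exp(-(np.dot(x, x) + np.dot(y, y)) / (2 * sigma ** 2))
    return prefactor * series

exact = gaussian_kernel(x, y)
for degree in (1, 2, 4, 8):
    approx = taylor_kernel(x, y, degree)
    print(f"degree {degree}: approx={approx:.6f}  exact={exact:.6f}  "
          f"abs error={abs(approx - exact):.2e}")
```

Each truncated series is an inner product of finite feature vectors whose entries are scaled monomials of degree at most d (times the Gaussian prefactor), which is what makes an explicit, finite-dimensional representation possible.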
1109.5072 | Jose Hernandez-Orallo | Javier Insa-Cabrera and Jose Hernandez-Orallo | Analysis of first prototype universal intelligence tests: evaluating and
comparing AI algorithms and humans | 114 pages, master thesis | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Today, available methods that assess AI systems are focused on using
empirical techniques to measure the performance of algorithms in some specific
tasks (e.g., playing chess, solving mazes or landing a helicopter). However, these
methods are not appropriate if we want to evaluate the general intelligence of
AI and, even less, if we compare it with human intelligence. The ANYNT project
has designed a new method of evaluation that tries to assess AI systems using
well known computational notions and problems which are as general as possible.
This new method serves to assess general intelligence (which allows us to learn
how to solve any new kind of problem we face) and not only to evaluate
performance on a set of specific tasks. This method focuses not only on
measuring the intelligence of algorithms, but also on assessing any intelligent
system (human beings, animals, AI, aliens?, ...), letting us place their
results on the same scale and, therefore, compare them. This new
approach will allow us (in the future) to evaluate and compare any kind of
intelligent system known or even to build/find, be it artificial or biological.
This master thesis aims at ensuring that this new method provides consistent
results when evaluating AI algorithms; this is done through the design and
implementation of prototypes of universal intelligence tests and their
application to different intelligent systems (AI algorithms and human beings).
From the study we analyze whether the results obtained by two different
intelligent systems are properly located on the same scale and we propose
changes and refinements to these prototypes in order to, in the future, be
able to achieve a truly universal intelligence test.
| [
{
"version": "v1",
"created": "Fri, 23 Sep 2011 13:36:10 GMT"
}
] | 1,316,995,200,000 | [
[
"Insa-Cabrera",
"Javier",
""
],
[
"Hernandez-Orallo",
"Jose",
""
]
] |
1109.5663 | S. Edelkamp | S. Edelkamp, J. Hoffmann | The Deterministic Part of IPC-4: An Overview | null | Journal Of Artificial Intelligence Research, Volume 24, pages
519-579, 2005 | 10.1613/jair.1677 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide an overview of the organization and results of the deterministic
part of the 4th International Planning Competition, i.e., of the part concerned
with evaluating systems doing deterministic planning. IPC-4 attracted even more
competing systems than its already large predecessors, and the competition
event was revised in several important respects. After giving an introduction
to the IPC, we briefly explain the main differences between the deterministic
part of IPC-4 and its predecessors. We then formally introduce the language
used, called PDDL2.2, which extends PDDL2.1 with derived predicates and timed
initial literals. We list the competing systems and overview the results of the
competition. The entire set of data is far too large to be presented in full.
We provide a detailed summary; the complete data is available in an online
appendix. We explain how we awarded the competition prizes.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2011 18:27:26 GMT"
}
] | 1,317,081,600,000 | [
[
"Edelkamp",
"S.",
""
],
[
"Hoffmann",
"J.",
""
]
] |
1109.5665 | D. McDermott | D. McDermott | PDDL2.1 - The Art of the Possible? Commentary on Fox and Long | null | Journal Of Artificial Intelligence Research, Volume 20, pages
145-148, 2003 | 10.1613/jair.1996 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PDDL2.1 was designed to push the envelope of what planning algorithms can do,
and it has succeeded. It adds two important features: durative actions, which
take time (and may have continuous effects); and objective functions for
measuring the quality of plans. The concept of durative actions is flawed; and
the treatment of their semantics reveals too strong an attachment to the way
many contemporary planners work. Future PDDL innovators should focus on
producing a clean semantics for additions to the language, and let planner
implementers worry about coupling their algorithms to problems expressed in the
latest version of the language.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2011 18:44:25 GMT"
}
] | 1,317,081,600,000 | [
[
"McDermott",
"D.",
""
]
] |
1109.5666 | D. E. Smith | D. E. Smith | The Case for Durative Actions: A Commentary on PDDL2.1 | null | Journal Of Artificial Intelligence Research, Volume 20, pages
149-154, 2003 | 10.1613/jair.1997 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The addition of durative actions to PDDL2.1 sparked some controversy. Fox and
Long argued that actions should be considered as instantaneous, but can start
and stop processes. Ultimately, a limited notion of durative actions was
incorporated into the language. I argue that this notion is still impoverished,
and that the underlying philosophical position of regarding durative actions as
being a shorthand for a start action, process, and stop action ignores the
realities of modelling and execution for complex systems.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2011 18:44:29 GMT"
}
] | 1,317,081,600,000 | [
[
"Smith",
"D. E.",
""
]
] |
1109.5711 | L. Li | L. Li, N. Onder, G. C. Whelan | Engineering a Conformant Probabilistic Planner | null | Journal Of Artificial Intelligence Research, Volume 25, pages
1-15, 2006 | 10.1613/jair.1701 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a partial-order, conformant, probabilistic planner, Probapop which
competed in the blind track of the Probabilistic Planning Competition in IPC-4.
We explain how we adapt distance based heuristics for use with probabilistic
domains. Probapop also incorporates heuristics based on probability of success.
We explain the successes and difficulties encountered during the design and
implementation of Probapop.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2011 20:20:27 GMT"
}
] | 1,317,168,000,000 | [
[
"Li",
"L.",
""
],
[
"Onder",
"N.",
""
],
[
"Whelan",
"G. C.",
""
]
] |
1109.5713 | J. Hoffmann | J. Hoffmann | Where 'Ignoring Delete Lists' Works: Local Search Topology in Planning
Benchmarks | null | Journal Of Artificial Intelligence Research, Volume 24, pages
685-758, 2005 | 10.1613/jair.1747 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Between 1998 and 2004, the planning community has seen vast progress in terms
of the sizes of benchmark examples that domain-independent planners can tackle
successfully. The key technique behind this progress is the use of heuristic
functions based on relaxing the planning task at hand, where the relaxation is
to assume that all delete lists are empty. The unprecedented success of such
methods, in many commonly used benchmark examples, calls for an understanding
of what classes of domains these methods are well suited for. In the
investigation at hand, we derive a formal background to such an understanding.
We perform a case study covering a range of 30 commonly used STRIPS and ADL
benchmark domains, including all examples used in the first four international
planning competitions. We *prove* connections between domain structure and
local search topology -- heuristic cost surface properties -- under an
idealized version of the heuristic functions used in modern planners. The
idealized heuristic function is called h^+, and differs from the practically
used functions in that it returns the length of an *optimal* relaxed plan,
which is NP-hard to compute. We identify several key characteristics of the
topology under h^+, concerning the existence/non-existence of unrecognized dead
ends, as well as the existence/non-existence of constant upper bounds on the
difficulty of escaping local minima and benches. These distinctions divide the
(set of all) planning domains into a taxonomy of classes of varying h^+
topology. As it turns out, many of the 30 investigated domains lie in classes
with a relatively easy topology. Most particularly, 12 of the domains lie in
classes where FF's search algorithm, provided with h^+, is a polynomial solving
mechanism. We also present results relating h^+ to its approximation as
implemented in FF. The behavior regarding dead ends is provably the same. We
summarize the results of an empirical investigation showing that, in many
domains, the topological qualities of h^+ are largely inherited by the
approximation. The overall investigation gives a rare example of a successful
analysis of the connections between typical-case problem structure, and search
performance. The theoretical investigation also gives hints on how the
topological phenomena might be automatically recognizable by domain analysis
techniques. We outline some preliminary steps we made into that direction.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2011 20:22:39 GMT"
}
] | 1,317,168,000,000 | [
[
"Hoffmann",
"J.",
""
]
] |
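The delete relaxation at the center of the record above (1109.5713) is easy to illustrate. The sketch below computes the additive heuristic h_add for a tiny, invented STRIPS-like task: delete lists are simply never represented, so fact costs only ever accumulate. Note that h_add is a practical approximation in the same spirit as FF's heuristic, not h^+ itself, which requires an optimal relaxed plan and is NP-hard to compute.

```python
# Tiny hypothetical STRIPS-like task: actions are (name, preconditions, adds);
# delete lists are ignored, which is exactly the relaxation the paper studies.
actions = [
    ("pick", {"at-robot-a", "box-at-a"}, {"holding"}),
    ("move", {"at-robot-a"},             {"at-robot-b"}),
    ("drop", {"holding", "at-robot-b"},  {"box-at-b"}),
]
init = {"at-robot-a", "box-at-a"}
goal = {"box-at-b"}

def h_add(state):
    """Additive delete-relaxation heuristic: propagate the cheapest cost of
    reaching each fact when deletes are ignored, then sum over the goal facts.
    h_add over-counts shared subgoals; h^+ (optimal relaxed plan length)
    avoids that but is NP-hard to compute."""
    cost = {f: 0 for f in state}
    changed = True
    while changed:
        changed = False
        for name, pre, add in actions:
            if all(p in cost for p in pre):
                c = 1 + sum(cost[p] for p in pre)
                for f in add:
                    if f not in cost or c < cost[f]:
                        cost[f] = c
                        changed = True
    if any(g not in cost for g in goal):
        return float("inf")   # unreachable even under the relaxation: a dead end
    return sum(cost[g] for g in goal)

print("h_add(init) =", h_add(init))   # 3 for this toy task (pick, move, drop)
```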