id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1607.08181 | Konstantin Yakovlev S | Aleksandr I. Panov, Konstantin Yakovlev | Psychologically inspired planning method for smart relocation task | As submitted to the 7th International Conference on Biologically
Inspired Cognitive Architectures (BICA 2016), New York, USA, July 16-19 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Behavior planning is known to be one of the basic cognitive functions, which
is essential for any cognitive architecture of any control system used in
robotics. At the same time, most of the widespread planning algorithms employed
in those systems are developed using only approaches and models from Artificial
Intelligence and do not take into account the numerous results of cognitive
experiments. As a result, there is a strong need for novel methods of behavior
planning suitable for modern cognitive architectures aimed at robot control.
One such method is presented in this work and is studied within a special class
of navigation tasks called smart relocation tasks. The method is based on a
hierarchical two-level model of abstraction and knowledge representation, i.e.
symbolic and subsymbolic. On the symbolic level, a sign world model is used for
knowledge representation and a hierarchical planning algorithm, PMA, is utilized
for planning. On the subsymbolic level, the task of path planning is considered
and solved as a graph search problem. The interaction between the two planners is
examined, and inter-level interfaces and feedback loops are described.
Preliminary experimental results are presented.
| [
{
"version": "v1",
"created": "Wed, 27 Jul 2016 17:08:05 GMT"
}
] | 1,469,664,000,000 | [
[
"Panov",
"Aleksandr I.",
""
],
[
"Yakovlev",
"Konstantin",
""
]
] |
1607.08485 | Manuele Leonelli | Manuele Leonelli, Eva Riccomagno, Jim Q. Smith | A symbolic algebra for the computation of expected utilities in
multiplicative influence diagrams | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influence diagrams provide a compact graphical representation of decision
problems. Several algorithms for the quick computation of their associated
expected utilities are available in the literature. However, they often rely on
a full quantification of both probabilistic uncertainties and utility values.
For problems where all random variables and decision spaces are finite and
discrete, here we develop a symbolic way to calculate the expected utilities of
influence diagrams that does not require a full numerical representation.
Within this approach expected utilities correspond to families of polynomials.
After characterizing their polynomial structure, we develop an efficient
symbolic algorithm for the propagation of expected utilities through the
diagram and provide an implementation of this algorithm using a computer
algebra system. We then characterize many of the standard manipulations of
influence diagrams as transformations of polynomials. We also generalize the
decision analytic framework of these diagrams by defining asymmetries as
operations over the expected utility polynomials.
| [
{
"version": "v1",
"created": "Thu, 28 Jul 2016 14:47:52 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Jan 2017 09:54:13 GMT"
}
] | 1,484,784,000,000 | [
[
"Leonelli",
"Manuele",
""
],
[
"Riccomagno",
"Eva",
""
],
[
"Smith",
"Jim Q.",
""
]
] |
1608.00139 | Taisuke Sato | Taisuke Sato | A Linear Algebraic Approach to Datalog Evaluation | 19 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a fundamentally new approach to Datalog evaluation.
Given a linear Datalog program DB written using N constants and binary
predicates, we first translate if-and-only-if completions of clauses in DB into
a set Eq(DB) of matrix equations with a non-linear operation where relations in
M_DB, the least Herbrand model of DB, are encoded as adjacency matrices. We
then translate Eq(DB) into another, purely linear, set of matrix equations
tilde_Eq(DB). It is proved that the least solution of tilde_Eq(DB) in the sense
of matrix ordering is converted to the least solution of Eq(DB) and the latter
gives M_DB as a set of adjacency matrices. Hence computing the least solution
of tilde_Eq(DB) is equivalent to computing M_DB specified by DB. For a class of
tail recursive programs and for some other types of programs, our approach
achieves O(N^3) time complexity irrespective of the number of variables in a
clause since only matrix operations costing O(N^3) or less are used.
We conducted two experiments that compute the least Herbrand models of linear
Datalog programs. The first experiment computes transitive closure of
artificial data and real network data taken from the Koblenz Network
Collection. The second one compares the proposed approach with the
state-of-the-art symbolic systems including two Prolog systems and two ASP
systems, in terms of computation time for a transitive closure program and the
same generation program. In the experiment, it is observed that our linear
algebraic approach runs 10^1 ~ 10^4 times faster than the symbolic systems when
data is not sparse. To appear in Theory and Practice of Logic Programming
(TPLP).
| [
{
"version": "v1",
"created": "Sat, 30 Jul 2016 16:14:16 GMT"
},
{
"version": "v2",
"created": "Fri, 24 Feb 2017 05:41:58 GMT"
}
] | 1,488,153,600,000 | [
[
"Sato",
"Taisuke",
""
]
] |
1608.00302 | Beishui Liao | Beishui Liao and Kang Xu and Huaxin Huang | Formulating Semantics of Probabilistic Argumentation by Characterizing
Subgraphs: Theory and Empirical Results | First version submitted to JLC on Feb 12, 2016. This is the final
version, accepted by JLC on Nov 28, 2016 | null | 10.1093/logcom/exx035 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In existing literature, while approximate approaches based on Monte-Carlo
simulation techniques have been proposed to compute the semantics of
probabilistic argumentation, how to improve the efficiency of computation
without using simulation is still an open problem. In this paper, we
address this problem from the following two perspectives. First, conceptually,
we define specific properties to characterize the subgraphs of a PrAG with
respect to a given extension, such that the probability of a set of arguments E
being an extension can be defined in terms of these properties, without (or
with less) construction of subgraphs. Second, computationally, we take
preferred semantics as an example, and develop algorithms to evaluate the
efficiency of our approach. The results show that our approach not only
dramatically decreases the time for computing p(E^\sigma), but also has an
attractive property, which is contrary to that of existing approaches: the
denser the edges of a PrAG are or the bigger the size of a given extension E
is, the more efficiently our approach computes p(E^\sigma). Meanwhile, it is
shown that under complete and preferred semantics, the problems of determining
p(E^\sigma) are fixed-parameter tractable.
| [
{
"version": "v1",
"created": "Mon, 1 Aug 2016 02:34:07 GMT"
},
{
"version": "v2",
"created": "Mon, 28 Nov 2016 21:16:47 GMT"
}
] | 1,508,889,600,000 | [
[
"Liao",
"Beishui",
""
],
[
"Xu",
"Kang",
""
],
[
"Huang",
"Huaxin",
""
]
] |
1608.00810 | Manuele Leonelli | Manuele Leonelli, Jim Q. Smith | Directed expected utility networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variety of statistical graphical models have been defined to represent the
conditional independences underlying a random vector of interest. Similarly,
many different graphs embedding various types of preferential independences, as
for example conditional utility independence and generalized additive
independence, have more recently started to appear. In this paper we define a
new graphical model, called a directed expected utility network, whose edges
depict both probabilistic and utility conditional independences. These embed a
very flexible class of utility models, much larger than those usually conceived
in standard influence diagrams. Our graphical representation, and various
transformations of the original graph into a tree structure, are then used to
guide fast routines for the computation of a decision problem's expected
utilities. We show that our routines generalize those usually utilized in
standard influence diagrams' evaluations under much more restrictive
conditions. We then proceed with the construction of a directed expected
utility network to support decision makers in the domain of household food
security.
| [
{
"version": "v1",
"created": "Tue, 2 Aug 2016 13:22:49 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Oct 2016 13:16:02 GMT"
}
] | 1,477,440,000,000 | [
[
"Leonelli",
"Manuele",
""
],
[
"Smith",
"Jim Q.",
""
]
] |
1608.01093 | Sarmimala Saikia | Ashwin Srinivasan, Gautam Shroff, Lovekesh Vig, Sarmimala Saikia,
Puneet Agarwal | Generation of Near-Optimal Solutions Using ILP-Guided Sampling | 7 pages | null | null | TR-EOIS-2016-1 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our interest in this paper is in optimisation problems that are intractable
to solve by direct numerical optimisation, but nevertheless have significant
amounts of relevant domain-specific knowledge. The category of heuristic search
techniques known as estimation of distribution algorithms (EDAs) seek to
incrementally sample from probability distributions in which optimal (or
near-optimal) solutions have increasingly higher probabilities. Can we use
domain knowledge to assist the estimation of these distributions? To answer
this in the affirmative, we need: (a) a general-purpose technique for the
incorporation of domain knowledge when constructing models for optimal values;
and (b) a way of using these models to generate new data samples. Here we
investigate a combination of the use of Inductive Logic Programming (ILP) for
(a), and standard logic-programming machinery to generate new samples for (b).
Specifically, on each iteration of distribution estimation, an ILP engine is
used to construct a model for good solutions. The resulting theory is then used
to guide the generation of new data instances, which are now restricted to
those derivable using the ILP model in conjunction with the background
knowledge. We demonstrate the approach on two optimisation problems
(predicting optimal depth-of-win for the KRK endgame, and job-shop scheduling).
Our results are promising: (a) On each iteration of distribution estimation,
samples obtained with an ILP theory have a substantially greater proportion of
good solutions than samples without a theory; and (b) On termination of
distribution estimation, samples obtained with an ILP theory contain more
near-optimal samples than samples without a theory. Taken together, these
results suggest that the use of ILP-constructed theories could be a useful
technique for incorporating complex domain knowledge into estimation of
distribution procedures.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 07:23:48 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Nov 2016 06:11:14 GMT"
}
] | 1,479,081,600,000 | [
[
"Srinivasan",
"Ashwin",
""
],
[
"Shroff",
"Gautam",
""
],
[
"Vig",
"Lovekesh",
""
],
[
"Saikia",
"Sarmimala",
""
],
[
"Agarwal",
"Puneet",
""
]
] |
1608.01302 | Caelan Garrett | Caelan Reed Garrett, Leslie Pack Kaelbling, Tomas Lozano-Perez | Learning to Rank for Synthesizing Planning Heuristics | null | International Joint Conference on Artificial Intelligence (IJCAI)
2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate learning heuristics for domain-specific planning. Prior work
framed learning a heuristic as an ordinary regression problem. However, in a
greedy best-first search, the ordering of states induced by a heuristic is more
indicative of the resulting planner's performance than mean squared error.
Thus, we instead frame learning a heuristic as a learning to rank problem which
we solve using a RankSVM formulation. Additionally, we introduce new methods
for computing features that capture temporal interactions in an approximate
plan. Our experiments on recent International Planning Competition problems
show that the RankSVM learned heuristics outperform both the original
heuristics and heuristics learned through ordinary regression.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 19:50:39 GMT"
}
] | 1,470,268,800,000 | [
[
"Garrett",
"Caelan Reed",
""
],
[
"Kaelbling",
"Leslie Pack",
""
],
[
"Lozano-Perez",
"Tomas",
""
]
] |
1608.01604 | Andrea Formisano | Stefania Costantini and Andrea Formisano | Query Answering in Resource-Based Answer Set Semantics | Paper presented at the 32nd International Conference on Logic
Programming (ICLP 2016), New York City, USA, 16-21 October 2016, 15 pages,
LaTeX, 3 PDF figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent work we defined resource-based answer set semantics, which is an
extension to answer set semantics stemming from the study of its relationship
with linear logic. In fact, the name of the new semantics comes from the fact
that in the linear-logic formulation every literal (including negative ones)
was considered as a resource. In this paper, we propose a query-answering
procedure reminiscent of Prolog for answer set programs under this extended
semantics as an extension of XSB-resolution for logic programs with negation.
We prove formal properties of the proposed procedure.
Under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Thu, 4 Aug 2016 16:38:52 GMT"
}
] | 1,470,355,200,000 | [
[
"Costantini",
"Stefania",
""
],
[
"Formisano",
"Andrea",
""
]
] |
1608.01835 | Bart Bogaerts | Bart Bogaerts and Tomi Janhunen and Shahab Tasharrofi | Stable-Unstable Semantics: Beyond NP with Normal Logic Programs | Paper presented at the 32nd International Conference on Logic
Programming (ICLP 2016), New York City, USA, 16-21 October 2016, 16 pages,
LaTeX, no figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Standard answer set programming (ASP) aims at solving search problems from
the first level of the polynomial time hierarchy (PH). Tackling search problems
beyond NP using ASP is less straightforward. The class of disjunctive logic
programs offers the most prominent way of reaching the second level of the PH,
but encoding respective hard problems as disjunctive programs typically
requires sophisticated techniques such as saturation or meta-interpretation.
The application of such techniques easily leads to encodings that are
inaccessible to non-experts. Furthermore, while disjunctive ASP solvers often
rely on calls to a (co-)NP oracle, it may be difficult to detect from the input
program where the oracle is being accessed. In other formalisms, such as
Quantified Boolean Formulas (QBFs), the interface to the underlying oracle is
more transparent as it is explicitly recorded in the quantifier prefix of a
formula. On the other hand, ASP has advantages over QBFs from the modeling
perspective. The rich high-level languages such as ASP-Core-2 offer a wide
variety of primitives that enable concise and natural encodings of search
problems. In this paper, we present a novel logic programming--based modeling
paradigm that combines the best features of ASP and QBFs. We develop so-called
combined logic programs in which oracles are directly cast as (normal) logic
programs themselves. Recursive incarnations of this construction enable logic
programming on arbitrarily high levels of the PH. We develop a proof-of-concept
implementation for our new paradigm.
This paper is under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 11:18:12 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Aug 2016 10:18:17 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Aug 2016 05:25:55 GMT"
}
] | 1,471,305,600,000 | [
[
"Bogaerts",
"Bart",
""
],
[
"Janhunen",
"Tomi",
""
],
[
"Tasharrofi",
"Shahab",
""
]
] |
1608.01946 | Mark Law | Mark Law, Alessandra Russo, Krysia Broda | Iterative Learning of Answer Set Programs from Context Dependent
Examples | Paper presented at the 32nd International Conference on Logic
Programming (ICLP 2016), New York City, USA, 16-21 October 2016, 22 pages,
LaTeX, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, several frameworks and systems have been proposed that
extend Inductive Logic Programming (ILP) to the Answer Set Programming (ASP)
paradigm. In ILP, examples must all be explained by a hypothesis together with
a given background knowledge. In existing systems, the background knowledge is
the same for all examples; however, examples may be context-dependent. This
means that some examples should be explained in the context of some
information, whereas others should be explained in different contexts. In this
paper, we capture this notion and present a context-dependent extension of the
Learning from Ordered Answer Sets framework. In this extension, contexts can be
used to further structure the background knowledge. We then propose a new
iterative algorithm, ILASP2i, which exploits this feature to scale up the
existing ILASP2 system to learning tasks with large numbers of examples. We
demonstrate the gain in scalability by applying both algorithms to various
learning tasks. Our results show that, compared to ILASP2, the newly proposed
ILASP2i system can be two orders of magnitude faster and use two orders of
magnitude less memory, whilst preserving the same average accuracy. This paper
is under consideration for acceptance in TPLP.
| [
{
"version": "v1",
"created": "Fri, 5 Aug 2016 17:33:23 GMT"
}
] | 1,470,614,400,000 | [
[
"Law",
"Mark",
""
],
[
"Russo",
"Alessandra",
""
],
[
"Broda",
"Krysia",
""
]
] |
1608.02287 | David Cox | David Cox | Delta Epsilon Alpha Star: A PAC-Admissible Search Algorithm | 8 pages, 0 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Delta Epsilon Alpha Star is a minimal coverage, real-time robotic search
algorithm that yields a moderately aggressive search path with minimal
backtracking. Search performance is bounded by placing a combinatorial bound,
epsilon and delta, on the maximum deviation from the theoretical shortest path
and the probability at which further deviations can occur. Additionally, we
formally define the notion of PAC-admissibility -- a relaxed admissibility
criterion for algorithms, and show that PAC-admissible algorithms are better
suited to robotic search situations than epsilon-admissible or strict
algorithms.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 00:14:50 GMT"
}
] | 1,470,700,800,000 | [
[
"Cox",
"David",
""
]
] |
1608.02441 | Matthias Thimm | Sarah A. Gaggl, Matthias Thimm | Proceedings of the Second Summer School on Argumentation: Computational
and Linguistic Perspectives (SSA'16) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This volume contains the thesis abstracts presented at the Second Summer
School on Argumentation: Computational and Linguistic Perspectives (SSA'2016)
held on September 8-12 in Potsdam, Germany.
| [
{
"version": "v1",
"created": "Wed, 3 Aug 2016 09:05:32 GMT"
}
] | 1,470,700,800,000 | [
[
"Gaggl",
"Sarah A.",
""
],
[
"Thimm",
"Matthias",
""
]
] |
1608.02450 | Daniele Theseider Dupr\'e | Laura Giordano and Daniele Theseider Dupr\'e | ASP for Minimal Entailment in a Rational Extension of SROEL | Paper presented at the 32nd International Conference on Logic
Programming (ICLP 2016), New York City, USA, 16-21 October 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we exploit Answer Set Programming (ASP) for reasoning in a
rational extension SROEL-R-T of the low-complexity description logic SROEL,
which underlies the OWL EL ontology language. In the extended language, a
typicality operator T is allowed to define concepts T(C) (typical C's) under a
rational semantics. It has been proven that instance checking under rational
entailment has a polynomial complexity. To strengthen rational entailment, in
this paper we consider a minimal model semantics. We show that, for arbitrary
SROEL-R-T knowledge bases, instance checking under minimal entailment is
\Pi^P_2-complete. Relying on a Small Model result, where models correspond to
answer sets of a suitable ASP encoding, we exploit Answer Set Preferences (and,
in particular, the asprin framework) for reasoning under minimal entailment.
The paper is under consideration for acceptance in Theory and Practice of Logic
Programming.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 14:26:46 GMT"
}
] | 1,470,700,800,000 | [
[
"Giordano",
"Laura",
""
],
[
"Dupré",
"Daniele Theseider",
""
]
] |
1608.02659 | Mohamed Ali Mahjoub | Anis Elbahi, Mohamed Nazih Omri, Mohamed Ali Mahjoub, Kamel Garrouch | Mouse Movement and Probabilistic Graphical Models Based E-Learning
Activity Recognition Improvement Possibilistic Model | in AJSE 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatically recognizing the e-learning activities is an important task for
improving the online learning process. Probabilistic graphical models such as
hidden Markov models and conditional random fields have been successfully used
in order to identify a Web user's activity. For such models, the sequences of
observation are crucial for training and inference processes. Despite the
efficiency of these probabilistic graphical models in segmenting and labeling
stochastic sequences, their performance is adversely affected by the imperfect
quality of data used for the construction of sequences of observation. In this
paper, a formalism of the possibilistic theory will be used in order to propose
a new approach for observation sequences preparation. The eminent contribution
of our approach is to evaluate the effect of possibilistic reasoning during the
generation of observation sequences on the effectiveness of hidden Markov
models and conditional random fields models. Using a dataset containing 51 real
manipulations related to three types of learners' tasks, the preliminary
experiments demonstrate that the sequences of observation obtained based on
possibilistic reasoning significantly improve the performance of hidden Markov
models and conditional random fields models in the automatic recognition of the
e-learning activities.
| [
{
"version": "v1",
"created": "Mon, 8 Aug 2016 23:48:19 GMT"
}
] | 1,470,787,200,000 | [
[
"Elbahi",
"Anis",
""
],
[
"Omri",
"Mohamed Nazih",
""
],
[
"Mahjoub",
"Mohamed Ali",
""
],
[
"Garrouch",
"Kamel",
""
]
] |
1608.02682 | Jaroslaw Zola | Subhadeep Karan and Jaroslaw Zola | Exact Structure Learning of Bayesian Networks by Optimal Path Extension | Published in the IEEE BigData 2016, this version contains a
correction to Figure 1c | null | 10.1109/BigData.2016.7840588 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian networks are probabilistic graphical models often used in big data
analytics. The problem of exact structure learning is to find a network
structure that is optimal under certain scoring criteria. The problem is known
to be NP-hard and the existing methods are both computationally and memory
intensive. In this paper, we introduce a new approach for exact structure
learning. Our strategy is to leverage the relationship between a partial network
structure and the remaining variables to constrain the number of ways in which
the partial network can be optimally extended. Via experimental results, we
show that the method provides up to three times improvement in runtime, and
orders of magnitude reduction in memory consumption over the current best
algorithms.
| [
{
"version": "v1",
"created": "Tue, 9 Aug 2016 03:07:50 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Nov 2016 04:45:15 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Mar 2017 14:47:03 GMT"
}
] | 1,490,140,800,000 | [
[
"Karan",
"Subhadeep",
""
],
[
"Zola",
"Jaroslaw",
""
]
] |
1608.02763 | Konstantin Yakovlev S | Konstantin Yakovlev, Anton Andreychuk | Resolving Spatial-Time Conflicts In A Set Of Any-angle Or
Angle-constrained Grid Paths | as submitted to the 2nd Workshop on Multi-Agent Path Finding
(http://www.andrew.cmu.edu/user/gswagner/workshop/ijcai_2016_multirobot_path_finding.html) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the multi-agent path finding problem (MAPF) for a group of agents
which are allowed to move into arbitrary directions on a 2D square grid. We
focus on centralized conflict resolution for independently computed plans. We
propose an algorithm that eliminates conflicts by using local re-planning and
introducing time offsets to the execution of paths by different agents.
Experimental results show that the algorithm can find high quality
conflict-free solutions at low computational cost.
| [
{
"version": "v1",
"created": "Tue, 9 Aug 2016 11:13:46 GMT"
}
] | 1,470,787,200,000 | [
[
"Yakovlev",
"Konstantin",
""
],
[
"Andreychuk",
"Anton",
""
]
] |
1608.03824 | Ashley Edwards | Ashley Edwards, Charles Isbell, Atsuo Takanishi | Perceptual Reward Functions | Deep Reinforcement Learning: Frontiers and Challenges Workshop, IJCAI
2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning problems are often described through rewards that
indicate if an agent has completed some task. This specification can yield
desirable behavior; however, many problems are difficult to specify in this
manner, as one often needs to know the proper configuration for the agent. When
humans are learning to solve tasks, we often learn from visual instructions
composed of images or videos. Such representations motivate our development of
Perceptual Reward Functions, which provide a mechanism for creating visual task
descriptions. We show that this approach allows an agent to learn from rewards
that are based on raw pixels rather than internal parameters.
| [
{
"version": "v1",
"created": "Fri, 12 Aug 2016 15:29:05 GMT"
}
] | 1,471,219,200,000 | [
[
"Edwards",
"Ashley",
""
],
[
"Isbell",
"Charles",
""
],
[
"Takanishi",
"Atsuo",
""
]
] |
1608.04672 | Kurt Ammon | Kurt Ammon | Informal Physical Reasoning Processes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fundamental question is whether Turing machines can model all reasoning
processes. We introduce an existence principle stating that the perception of
the physical existence of any Turing program can serve as a physical causation
for the application of any Turing-computable function to this Turing program.
The existence principle overcomes the limitation of the outputs of Turing
machines to lists, that is, recursively enumerable sets. The principle is
illustrated by productive partial functions for productive sets such as the set
of the Goedel numbers of the Turing-computable total functions. The existence
principle and productive functions imply the existence of physical systems
whose reasoning processes cannot be modeled by Turing machines. These systems
are called creative. Creative systems can prove the undecidable formula in
Goedel's theorem in another formal system which is constructed at a later point
in time. A hypothesis about creative systems, which is based on computer
experiments, is introduced.
| [
{
"version": "v1",
"created": "Mon, 15 Aug 2016 16:51:38 GMT"
}
] | 1,471,392,000,000 | [
[
"Ammon",
"Kurt",
""
]
] |
1608.04996 | Kamyar Azizzadenesheli Ph.D. | Kamyar Azizzadenesheli, Alessandro Lazaric, and Animashree Anandkumar | Open Problem: Approximate Planning of POMDPs in the class of Memoryless
Policies | arXiv admin note: substantial text overlap with arXiv:1602.07764 | 29th Annual Conference on Learning Theory (2016) 1639--1642 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning plays an important role in the broad class of decision theory.
Planning has drawn much attention in recent work in the robotics and sequential
decision making areas. Recently, Reinforcement Learning (RL), as an
agent-environment interaction problem, has brought further attention to
planning methods. Generally in RL, one can assume a generative model, e.g.
graphical models, for the environment, and then the task for the RL agent is to
learn the model parameters and find the optimal strategy based on these learnt
parameters. Based on environment behavior, the agent can assume various types
of generative models, e.g. Multi Armed Bandit for a static environment, or
Markov Decision Process (MDP) for a dynamic environment. The advantage of these
popular models is their simplicity, which results in tractable methods of
learning the parameters and finding the optimal policy. The drawback of these
models is again their simplicity: these models usually underfit and
underestimate the actual environment behavior. For example, in robotics, the
agent usually has noisy observations of the environment inner state and MDP is
not a suitable model.
More complex models like Partially Observable Markov Decision Process (POMDP)
can compensate for this drawback. Fitting this model to the environment, where
the partial observation is given to the agent, generally gives dramatic
performance improvement, sometimes unbounded improvement, compared to MDP. In
general, finding the optimal policy for the POMDP model is computationally
intractable and fully non-convex, even for the class of memoryless policies.
The open problem is to come up with a method to find an exact or an approximate
optimal stochastic memoryless policy for POMDP models.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 15:20:35 GMT"
}
] | 1,471,478,400,000 | [
[
"Azizzadenesheli",
"Kamyar",
""
],
[
"Lazaric",
"Alessandro",
""
],
[
"Anandkumar",
"Animashree",
""
]
] |
1608.05046 | Long Ouyang | Long Ouyang, Michael Henry Tessler, Daniel Ly, Noah Goodman | Practical optimal experiment design with probabilistic programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scientists often run experiments to distinguish competing theories. This
requires patience, rigor, and ingenuity - there is often a large space of
possible experiments one could run. But we need not comb this space by hand -
if we represent our theories as formal models and explicitly declare the space
of experiments, we can automate the search for good experiments, looking for
those with high expected information gain. Here, we present a general and
principled approach to experiment design based on probabilistic programming
languages (PPLs). PPLs offer a clean separation between declaring problems and
solving them, which means that the scientist can automate experiment design by
simply declaring her model and experiment spaces in the PPL without having to
worry about the details of calculating information gain. We demonstrate our
system in two case studies drawn from cognitive psychology, where we use it to
design optimal experiments in the domains of sequence prediction and
categorization. We find strong empirical validation that our automatically
designed experiments were indeed optimal. We conclude by discussing a number of
interesting questions for future research.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 18:59:23 GMT"
}
] | 1,471,478,400,000 | [
[
"Ouyang",
"Long",
""
],
[
"Tessler",
"Michael Henry",
""
],
[
"Ly",
"Daniel",
""
],
[
"Goodman",
"Noah",
""
]
] |
1608.05151 | Harm Van Seijen | Harm van Seijen | Effective Multi-step Temporal-Difference Learning for Non-Linear
Function Approximation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-step temporal-difference (TD) learning, where the update targets
contain information from multiple time steps ahead, is one of the most popular
forms of TD learning for linear function approximation. The reason is that
multi-step methods often yield substantially better performance than their
single-step counterparts, due to a lower bias of the update targets. For
non-linear function approximation, however, single-step methods appear to be
the norm. Part of the reason could be that on many domains the popular
multi-step methods TD($\lambda$) and Sarsa($\lambda$) do not perform well when
combined with non-linear function approximation. In particular, they are very
susceptible to divergence of value estimates. In this paper, we identify the
reason behind this. Furthermore, based on our analysis, we propose a new
multi-step TD method for non-linear function approximation that addresses this
issue. We confirm the effectiveness of our method using two benchmark tasks
with neural networks as function approximators.
| [
{
"version": "v1",
"created": "Thu, 18 Aug 2016 01:21:27 GMT"
}
] | 1,471,564,800,000 | [
[
"van Seijen",
"Harm",
""
]
] |
1608.05609 | Bart Bogaerts | Joachim Jansen, Jo Devriendt, Bart Bogaerts, Gerda Janssens, Marc
Denecker | Implementing a Relevance Tracker Module | Paper presented at the 9th Workshop on Answer Set Programming and
Other Computing Paradigms (ASPOCP 2016), New York City, USA, 16 October 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PC(ID) extends propositional logic with inductive definitions: rule sets
under the well-founded semantics. Recently, a notion of relevance was
introduced for this language. This notion determines the set of undecided
literals that can still influence the satisfiability of a PC(ID) formula in a
given partial assignment. The idea is that the PC(ID) solver can make decisions
only on relevant literals without losing soundness and thus safely ignore
irrelevant literals.
One important insight is that the relevance of a literal is completely
determined by the current solver state. During search, changes to the solver
state affect the relevance of literals. In this paper, we discuss an
incremental, lightweight implementation of a relevance tracker module that can
be added to and interact with an out-of-the-box SAT(ID) solver.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 14:19:21 GMT"
}
] | 1,471,824,000,000 | [
[
"Jansen",
"Joachim",
""
],
[
"Devriendt",
"Jo",
""
],
[
"Bogaerts",
"Bart",
""
],
[
"Janssens",
"Gerda",
""
],
[
"Denecker",
"Marc",
""
]
] |
1608.05694 | Vladislav Kovchegov B | Vladislav B Kovchegov | The languages of actions, formal grammars and qualitive modeling of
companies | 40 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we discuss methods of using the language of actions, formal
languages, and grammars for qualitative conceptual linguistic modeling of
companies as technological and human institutions. The central problem arising
from this discussion is to find and describe a language structure for the
external and internal flows of information of companies. We anticipate that the
language structure of external and internal base flows determines the structure
of companies. In the structure modeling of an abstract industrial company an
internal base flow of information is constructed as a certain flow of words
composed in the theoretical parts-processes-actions language. The language of
procedures is found for an external base flow of information for an insurance
company. The formal stochastic grammar for the language of procedures is found
by statistical methods and is used in understanding the tendencies of the
health care industry. We present the model of human communications as a random
walk on the semantic tree.
| [
{
"version": "v1",
"created": "Fri, 19 Aug 2016 18:50:21 GMT"
}
] | 1,471,824,000,000 | [
[
"Kovchegov",
"Vladislav B",
""
]
] |
1608.06175 | Andrej Gajduk | Andrej Gajduk | Effectiveness of greedily collecting items in open world games | 3 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since Pokemon Go sent millions on the quest of collecting virtual monsters,
an important question has been on the minds of many people: Is going after the
closest item first a time-and-cost-effective way to play? Here, we show that
this is in fact a good strategy which performs on average only 7% worse than
the best possible solution in terms of the total distance traveled to gather
all the items. Even when accounting for errors due to the inability of people
to accurately measure distances by eye, the performance only goes down to 16%
worse than the optimal solution.
| [
{
"version": "v1",
"created": "Wed, 17 Aug 2016 20:43:56 GMT"
}
] | 1,471,910,400,000 | [
[
"Gajduk",
"Andrej",
""
]
] |
1608.06349 | Don Perlis | Don Perlis | Five dimensions of reasoning in the wild | minor typos corrected from AAAI version, Proceedings (Blue-Sky track)
AAAI-2016, Phoenix AZ | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning does not work well when done in isolation from its significance,
both to the needs and interests of an agent and with respect to the wider
world. Moreover, those issues may best be handled with a new sort of data
structure that goes beyond the knowledge base and incorporates aspects of
perceptual knowledge and even more, in which a kind of anticipatory action may
be key.
| [
{
"version": "v1",
"created": "Tue, 23 Aug 2016 00:40:27 GMT"
}
] | 1,471,996,800,000 | [
[
"Perlis",
"Don",
""
]
] |
1608.06787 | Natasha Alechina | Natasha Alechina, Mehdi Dastani, and Brian Logan | Expressibility of norms in temporal logic | 3 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this short note we address the issue of expressing norms (such as
obligations and prohibitions) in temporal logic. In particular, we address the
argument from [Governatori 2015] that norms cannot be expressed in Linear Time
Temporal Logic (LTL).
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2016 12:01:36 GMT"
}
] | 1,472,083,200,000 | [
[
"Alechina",
"Natasha",
""
],
[
"Dastani",
"Mehdi",
""
],
[
"Logan",
"Brian",
""
]
] |
1608.06845 | Salisu Abdulrahman | Salisu Mamman Abdulrahman, Pavel Brazdil | Effect of Incomplete Meta-dataset on Average Ranking Method | 8 pages, two figures and 6 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the simplest metalearning methods is the average ranking method. This
method uses metadata in the form of test results of a given set of algorithms
on a given set of datasets and calculates an average rank for each algorithm. The
ranks are used to construct the average ranking. We investigate the problem of
how the process of generating the average ranking is affected by incomplete
metadata including fewer test results. This issue is relevant, because if we
could show that incomplete metadata does not affect the final results much, we
could exploit this in future designs: we could simply conduct fewer tests and
thus save computation time. In this paper we describe an upgraded average ranking
method that is capable of dealing with incomplete metadata. Our results show
that the proposed method is relatively robust to omissions of test results in
the meta-datasets.
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2016 14:44:33 GMT"
}
] | 1,475,452,800,000 | [
[
"Abdulrahman",
"Salisu Mamman",
""
],
[
"Brazdil",
"Pavel",
""
]
] |
1608.06910 | Patrick Kahl | Patrick Thor Kahl, Anthony P. Leclerc, Tran Cao Son | A Parallel Memory-efficient Epistemic Logic Program Solver: Harder,
Better, Faster | Paper presented at the 9th Workshop on Answer Set Programming and
Other Computing Paradigms (ASPOCP 2016), New York City, USA, 16 October 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As the practical use of answer set programming (ASP) has grown with the
development of efficient solvers, we expect a growing interest in extensions of
ASP as their semantics stabilize and solvers supporting them mature. Epistemic
Specifications, which adds modal operators K and M to the language of ASP, is
one such extension. We call a program in this language an epistemic logic
program (ELP). Solvers have thus far been practical for only the simplest ELPs
due to exponential growth of the search space. We describe a solver that is
able to solve harder problems better (e.g., without exponentially-growing
memory needs w.r.t. K and M occurrences) and faster than any other known ELP
solver.
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2016 18:18:08 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Oct 2016 16:25:52 GMT"
}
] | 1,476,403,200,000 | [
[
"Kahl",
"Patrick Thor",
""
],
[
"Leclerc",
"Anthony P.",
""
],
[
"Son",
"Tran Cao",
""
]
] |
1608.06954 | Hiroyuki Kasai | Hiromi Narimatsu and Hiroyuki Kasai | State Duration and Interval Modeling in Hidden Semi-Markov Model for
Sequential Data Analysis | null | Annals of Mathematics and Artificial Intelligence, vol.81, Issue
3-4, pp.377-403, 2017 | 10.1007/s10472-017-9561-y | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential data modeling and analysis have become indispensable tools for
analyzing data such as time-series, because larger amounts of
sensed event data have become available. These methods capture the sequential
structure of data of interest, such as input-output relations and correlation
among datasets. However, because most studies in this area are specialized or
limited to their respective applications, rigorous requirement analysis of such
models has not been undertaken from a general perspective. Therefore, we
particularly examine the structure of sequential data, and extract the
necessity of `state duration' and `state interval' of events for efficient and
rich representation of sequential data. Specifically addressing the hidden
semi-Markov model (HSMM) that represents such state duration inside a model, we
attempt to add representational capability of a state interval of events onto
HSMM. To this end, we propose two extended models: an interval state hidden
semi-Markov model (IS-HSMM) to express the length of a state interval with a
special state node designated as "interval state node"; and an interval length
probability hidden semi-Markov model (ILP-HSMM) which represents the length of
the state interval with a new probabilistic parameter "interval length
probability." Exhaustive simulations have revealed superior performance of the
proposed models in comparison with HSMM. These proposed models are the first
reported extensions of HMM to support state interval representation as well as
state duration representation.
| [
{
"version": "v1",
"created": "Wed, 24 Aug 2016 20:11:14 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Feb 2019 23:05:06 GMT"
}
] | 1,550,188,800,000 | [
[
"Narimatsu",
"Hiromi",
""
],
[
"Kasai",
"Hiroyuki",
""
]
] |
1608.07223 | J. Quetzalcoatl Toledo-Marin | J. Quetzalc\'oatl Toledo-Mar\'in, Rogelio D\'iaz-M\'endez, Marcelo del
Castillo Mussot | Is a good offensive always the best defense? | 12 pages, 12 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A checkers-like model game with a simplified set of rules is studied through
extensive simulations of agents with different expertise and strategies. The
introduction of complementary strategies, in a quite general way, provides a
tool to mimic the basic ingredients of a wide scope of real games. We find that
only for the player having the higher offensive expertise (the dominant
player), maximizing the offensive always increases the probability of winning. For the
non-dominant player, interestingly, a complete minimization of the offensive
becomes the best way to win in many situations, depending on the relative
values of the defense expertise. Further simulations on the interplay of
defense expertise were done separately, in the context of a fully-offensive
scenario, offering a starting point for analytical treatments. In particular,
we established that in this scenario the total number of moves is defined only
by the player with the lower defensive expertise. We believe that these results
represent a first step towards a new way to improve decision-making in a large
number of real zero-sum games.
| [
{
"version": "v1",
"created": "Tue, 23 Aug 2016 15:31:36 GMT"
}
] | 1,472,169,600,000 | [
[
"Toledo-Marín",
"J. Quetzalcóatl",
""
],
[
"Díaz-Méndez",
"Rogelio",
""
],
[
"Mussot",
"Marcelo del Castillo",
""
]
] |
1608.07225 | Joanna Tomasik | Pierre Berg\'e, Kaourintin Le Guiban, Arpad Rimmel, Joanna Tomasik | On Simulated Annealing Dedicated to Maximin Latin Hypercube Designs | extended version of ACM GECCO 2016 paper entitled "Search Space
Exploration and an Optimization Criterion for Hard Design Problems" | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of our research was to enhance local search heuristics used to
construct Latin Hypercube Designs. First, we introduce the \textit{1D-move}
perturbation to improve the space exploration performed by these algorithms.
Second, we propose a new evaluation function $\psi_{p,\sigma}$ specifically
targeting the Maximin criterion.
Exhaustive series of experiments with Simulated Annealing, which we used as a
typically well-behaved local search heuristic, confirm that our goal was
reached, as the results we obtained surpass the best scores reported in the
literature. Furthermore, the $\psi_{p,\sigma}$ function seems very promising
for a wide spectrum of optimization problems through the Maximin criterion.
| [
{
"version": "v1",
"created": "Tue, 23 Aug 2016 14:55:43 GMT"
}
] | 1,472,169,600,000 | [
[
"Bergé",
"Pierre",
""
],
[
"Guiban",
"Kaourintin Le",
""
],
[
"Rimmel",
"Arpad",
""
],
[
"Tomasik",
"Joanna",
""
]
] |
1608.07764 | Russell K. Standish | Russell K. Standish | The Movie Graph Argument Revisited | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we reexamine the Movie Graph Argument, which demonstrates a
basic incompatibility between computationalism and materialism. We discover
that the incompatibility is only manifest in singular classical-like universes.
If we accept that we live in a Multiverse, then the incompatibility goes away,
but in that case another line of argument shows that with computationalism, the
fundamental, or primitive materiality has no causal influence on what is
observed, which must be derivable from basic arithmetic properties.
| [
{
"version": "v1",
"created": "Sun, 28 Aug 2016 04:18:39 GMT"
}
] | 1,472,515,200,000 | [
[
"Standish",
"Russell K.",
""
]
] |
1608.07846 | Henry Kim | Henry M. Kim, Jackie Ho Nam Cheung, Marek Laskowski, Iryna Gel | Data Analytics using Ontologies of Management Theories: Towards
Implementing 'From Theory to Practice' | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore how computational ontologies can be impactful vis-a-vis the
developing discipline of "data science." We posit an approach wherein
management theories are represented as formal axioms, and then applied to draw
inferences about data that reside in corporate databases. That is, management
theories would be implemented as rules within a data analytics engine. We
demonstrate a case study development of such an ontology by formally
representing an accounting theory in First-Order Logic. Though quite
preliminary, the idea that an information technology, namely ontologies, can
potentially actualize the academic cliche, "From Theory to Practice," and be
applicable to the burgeoning domain of data analytics is novel and exciting.
| [
{
"version": "v1",
"created": "Sun, 28 Aug 2016 19:51:31 GMT"
}
] | 1,472,515,200,000 | [
[
"Kim",
"Henry M.",
""
],
[
"Cheung",
"Jackie Ho Nam",
""
],
[
"Laskowski",
"Marek",
""
],
[
"Gel",
"Iryna",
""
]
] |
1608.08015 | Charles Prud'homme | Charles Prud'homme, Xavier Lorca and Narendra Jussien | Event Selection Rules to Compute Explanations | null | null | null | 15/1/INFO | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explanations have been introduced in the previous century. Their interest in
reducing the search space is no longer questioned. Yet, their efficient
implementation into CSP solvers is still a challenge. In this paper, we
introduce ESeR, an Event Selection Rules algorithm that filters events
generated during propagation. This dynamic selection enables an efficient
computation of explanations for intelligent backtracking algorithms. We show
the effectiveness of our approach on the instances of the last three MiniZinc
challenges.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 12:07:04 GMT"
}
] | 1,472,515,200,000 | [
[
"Prud'homme",
"Charles",
""
],
[
"Lorca",
"Xavier",
""
],
[
"Jussien",
"Narendra",
""
]
] |
1608.08028 | Joris Mooij | Paul K. Rubenstein, Stephan Bongers, Bernhard Schoelkopf, Joris M.
Mooij | From Deterministic ODEs to Dynamic Structural Causal Models | Accepted for publication in Conference on Uncertainy in Artificial
Intelligence | Proceedings of the 35th Annual Conference on Uncertainty in
Artificial Intelligence (2018), 114-123 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural Causal Models are widely used in causal modelling, but how they
relate to other modelling tools is poorly understood. In this paper we provide
a novel perspective on the relationship between Ordinary Differential Equations
and Structural Causal Models. We show how, under certain conditions, the
asymptotic behaviour of an Ordinary Differential Equation under non-constant
interventions can be modelled using Dynamic Structural Causal Models. In
contrast to earlier work, we study not only the effect of interventions on
equilibrium states; rather, we model asymptotic behaviour that is dynamic under
interventions that vary in time, and include as a special case the study of
static equilibria.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 12:43:42 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Jul 2018 10:05:49 GMT"
}
] | 1,661,904,000,000 | [
[
"Rubenstein",
"Paul K.",
""
],
[
"Bongers",
"Stephan",
""
],
[
"Schoelkopf",
"Bernhard",
""
],
[
"Mooij",
"Joris M.",
""
]
] |
1608.08072 | Leslie Sikos Ph.D. | Leslie F. Sikos | A Novel Approach to Multimedia Ontology Engineering for Automated
Reasoning over Audiovisual LOD Datasets | null | null | 10.1007/978-3-662-49381-6_1 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimedia reasoning, which is suitable for, among others, multimedia content
analysis and high-level video scene interpretation, relies on the formal and
comprehensive conceptualization of the represented knowledge domain. However,
most multimedia ontologies are not exhaustive in terms of role definitions, and
do not incorporate complex role inclusions and role interdependencies. In fact,
most multimedia ontologies do not have a role box at all, and implement only a
basic subset of the available logical constructors. Consequently, their
application in multimedia reasoning is limited. To address the above issues,
VidOnt, the very first multimedia ontology with SROIQ(D) expressivity and a
DL-safe ruleset has been introduced for next-generation multimedia reasoning.
In contrast to the common practice, the formal grounding has been set in one of
the most expressive description logics, and the ontology validated with
industry-leading reasoners, namely HermiT and FaCT++. This paper also presents
best practices for developing multimedia ontologies, based on my ontology
engineering approach.
| [
{
"version": "v1",
"created": "Fri, 26 Aug 2016 05:53:07 GMT"
}
] | 1,472,515,200,000 | [
[
"Sikos",
"Leslie F.",
""
]
] |
1608.08144 | Vladimir Lifschitz | Vladimir Lifschitz | Achievements in Answer Set Programming | Revised version of a paper published in Theory and Practice of Logic
Programming | Theory and Practice of Logic Programming, Vol. 17, 2017 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes an approach to the methodology of answer set programming
(ASP) that can facilitate the design of encodings that are easy to understand
and provably correct. Under this approach, after appending a rule or a small
group of rules to the emerging program we include a comment that states what
has been "achieved" so far. This strategy allows us to set out our
understanding of the design of the program by describing the roles of small
parts of the program in a mathematically precise way.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 16:59:43 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Aug 2019 01:06:05 GMT"
}
] | 1,565,222,400,000 | [
[
"Lifschitz",
"Vladimir",
""
]
] |
1608.08262 | Yuanlin Zhang | Michael Gelfond and Yuanlin Zhang | Vicious Circle Principle and Formation of Sets in ASP Based Languages | Paper presented at the 9th Workshop on Answer Set Programming and
Other Computing Paradigms (ASPOCP 2016), New York City, USA, 16 October 2016 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The paper continues the investigation of Poincaré and Russell's Vicious Circle
Principle (VCP) in the context of the design of logic programming languages
with sets. We expand previously introduced language Alog with aggregates by
allowing infinite sets and several additional set related constructs useful for
knowledge representation and teaching. In addition, we propose an alternative
formalization of the original VCP and incorporate it into the semantics of a new
language, Slog+, which allows more liberal construction of sets and their use
in programming rules. We show that, for programs without disjunction and
infinite sets, the formal semantics of aggregates in Slog+ coincides with that
of several other known languages. Their intuitive and formal semantics,
however, are based on quite different ideas and seem to be more involved than
that of Slog+.
| [
{
"version": "v1",
"created": "Mon, 29 Aug 2016 21:58:07 GMT"
}
] | 1,472,601,600,000 | [
[
"Gelfond",
"Michael",
""
],
[
"Zhang",
"Yuanlin",
""
]
] |
1608.08447 | Bart Bogaerts | Jo Devriendt and Bart Bogaerts | BreakID: Static Symmetry Breaking for ASP (System Description) | Paper presented at the 9th Workshop on Answer Set Programming and
Other Computing Paradigms (ASPOCP 2016), New York City, USA, 16 October 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symmetry breaking has been proven to be an efficient preprocessing technique
for satisfiability solving (SAT). In this paper, we port the state-of-the-art
SAT symmetry breaker BreakID to answer set programming (ASP). The result is a
lightweight tool that can be plugged in between the grounding and the solving
phases that are common when modelling in ASP. We compare our tool with sbass,
the current state-of-the-art symmetry breaker for ASP.
| [
{
"version": "v1",
"created": "Tue, 30 Aug 2016 13:47:41 GMT"
}
] | 1,472,601,600,000 | [
[
"Devriendt",
"Jo",
""
],
[
"Bogaerts",
"Bart",
""
]
] |
1609.00030 | Marcello Balduccini | Marcello Balduccini, Daniele Magazzeni, Marco Maratea | PDDL+ Planning via Constraint Answer Set Programming | Paper presented at the 9th Workshop on Answer Set Programming and
Other Computing Paradigms (ASPOCP 2016), New York City, USA, 16 October 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PDDL+ is an extension of PDDL that enables modelling planning domains with
mixed discrete-continuous dynamics. In this paper we present a new approach to
PDDL+ planning based on Constraint Answer Set Programming (CASP), i.e. ASP
rules plus numerical constraints. To the best of our knowledge, ours is the
first attempt to link PDDL+ planning and logic programming. We provide an
encoding of PDDL+ models into CASP problems. The encoding can handle non-linear
hybrid domains, and represents a solid basis for applying logic programming to
PDDL+ planning. As a case study, we consider the EZCSP CASP solver and obtain
promising results on a set of PDDL+ benchmark problems.
| [
{
"version": "v1",
"created": "Wed, 31 Aug 2016 20:38:30 GMT"
}
] | 1,472,774,400,000 | [
[
"Balduccini",
"Marcello",
""
],
[
"Magazzeni",
"Daniele",
""
],
[
"Maratea",
"Marco",
""
]
] |
1609.00462 | Markus Wagner | Markus Wagner, Marius Lindauer, Mustafa Misir, Samadhi Nallaperuma,
Frank Hutter | A case study of algorithm selection for the traveling thief problem | 23 pages, this article is under review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many real-world problems are composed of several interacting components. In
order to facilitate research on such interactions, the Traveling Thief Problem
(TTP) was created in 2013 as the combination of two well-understood
combinatorial optimization problems.
With this article, we contribute in four ways. First, we create a
comprehensive dataset that comprises the performance data of 21 TTP algorithms
on the full original set of 9720 TTP instances. Second, we define 55
characteristics for all TTP instances that can be used to select the best
algorithm on a per-instance basis. Third, we use these algorithms and features
to construct the first algorithm portfolios for TTP, clearly outperforming the
single best algorithm. Finally, we study which algorithms contribute most to
this portfolio.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2016 04:03:22 GMT"
}
] | 1,473,033,600,000 | [
[
"Wagner",
"Markus",
""
],
[
"Lindauer",
"Marius",
""
],
[
"Misir",
"Mustafa",
""
],
[
"Nallaperuma",
"Samadhi",
""
],
[
"Hutter",
"Frank",
""
]
] |
1609.00759 | Jo Devriendt | San Pham, Jo Devriendt, Maurice Bruynooghe, Patrick De Causmaecker | A MIP Backend for the IDP System | internal report, 10 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The IDP knowledge base system currently uses MiniSAT(ID) as its backend
Constraint Programming (CP) solver. A few similar systems have used a Mixed
Integer Programming (MIP) solver as backend. However, so far little is known
about when the MIP solver is preferable. This paper explores this question. It
describes the use of CPLEX as a backend for IDP and reports on experiments
comparing both backends.
| [
{
"version": "v1",
"created": "Fri, 2 Sep 2016 22:20:05 GMT"
}
] | 1,473,120,000,000 | [
[
"Pham",
"San",
""
],
[
"Devriendt",
"Jo",
""
],
[
"Bruynooghe",
"Maurice",
""
],
[
"De Causmaecker",
"Patrick",
""
]
] |
1609.01995 | Martha White | Martha White | Unifying task specification in reinforcement learning | Published at the International Conference on Machine Learning, 2017.
This version includes minor typo and error fixes | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning tasks are typically specified as Markov decision
processes. This formalism has been highly successful, though specifications
often couple the dynamics of the environment and the learning objective. This
lack of modularity can complicate generalization of the task specification, as
well as obfuscate connections between different task settings, such as episodic
and continuing. In this work, we introduce the RL task formalism, which provides
a unification through simple constructs including a generalization to
transition-based discounting. Through a series of examples, we demonstrate the
generality and utility of this formalism. Finally, we extend standard learning
constructs, including Bellman operators, and extend some seminal theoretical
results, including approximation error bounds. Overall, we provide a
well-understood and sound formalism on which to build theoretical results and
simplify algorithm use and development.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2016 14:27:56 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Mar 2017 02:36:21 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Jul 2017 09:55:23 GMT"
},
{
"version": "v4",
"created": "Fri, 17 Sep 2021 22:26:09 GMT"
}
] | 1,632,182,400,000 | [
[
"White",
"Martha",
""
]
] |
1609.02139 | Robin Allesiardo | Robin Allesiardo, Rapha\"el F\'eraud and Odalric-Ambrym Maillard | Random Shuffling and Resets for the Non-stationary Stochastic Bandit
Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a non-stationary formulation of the stochastic multi-armed bandit
where the rewards are no longer assumed to be identically distributed. For the
best-arm identification task, we introduce a version of Successive Elimination
based on random shuffling of the $K$ arms. We prove that under a novel and mild
assumption on the mean gap $\Delta$, this simple but powerful modification
achieves the same guarantees in terms of sample complexity and cumulative regret
as its original version, but in a much wider class of problems, as it is no
longer constrained to stationary distributions. We also show that the original
{\sc Successive Elimination} fails to have controlled regret in this more
general scenario, thus showing the benefit of shuffling. We then remove our
mild assumption and adapt the algorithm to the best-arm identification task
with switching arms. We adapt the definition of the sample complexity for that
case and prove that, against an optimal policy with $N-1$ switches of the
optimal arm, this new algorithm achieves an expected sample complexity of
$O(\Delta^{-2}\sqrt{NK\delta^{-1} \log(K \delta^{-1})})$, where $\delta$ is the
probability of failure of the algorithm, and an expected cumulative regret of
$O(\Delta^{-1}{\sqrt{NTK \log (TK)}})$ after $T$ time steps.
| [
{
"version": "v1",
"created": "Wed, 7 Sep 2016 13:31:21 GMT"
}
] | 1,473,379,200,000 | [
[
"Allesiardo",
"Robin",
""
],
[
"Féraud",
"Raphaël",
""
],
[
"Maillard",
"Odalric-Ambrym",
""
]
] |
1609.02236 | Shanbo Chu | Shanbo Chu, Yong Jiang and Kewei Tu | Latent Dependency Forest Models | 10 pages, 3 figures, conference | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Probabilistic modeling is one of the foundations of modern machine learning
and artificial intelligence. In this paper, we propose a novel type of
probabilistic models named latent dependency forest models (LDFMs). An LDFM
models the dependencies between random variables with a forest structure that
can change dynamically based on the variable values. It is therefore capable of
modeling context-specific independence. We parameterize an LDFM using a
first-order non-projective dependency grammar. Learning LDFMs from data can be
formulated purely as a parameter learning problem, and hence the difficult
problem of model structure learning is circumvented. Our experimental results
show that LDFMs are competitive with existing probabilistic models.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 00:57:19 GMT"
},
{
"version": "v2",
"created": "Sun, 20 Nov 2016 15:51:35 GMT"
}
] | 1,479,772,800,000 | [
[
"Chu",
"Shanbo",
""
],
[
"Jiang",
"Yong",
""
],
[
"Tu",
"Kewei",
""
]
] |
1609.02584 | Patrick Rodler | Patrick Rodler | Towards Better Response Times and Higher-Quality Queries in Interactive
Knowledge Base Debugging | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many AI applications rely on knowledge encoded in a logical knowledge base
(KB). The most essential benefit of such logical KBs is the opportunity to
perform automatic reasoning, which, however, requires a KB to meet some minimal
quality criteria such as consistency. Without adequate tool assistance, the
task of resolving such violated quality criteria in a KB can be extremely hard,
especially when the problematic KB is large and complex. To this end,
interactive KB debuggers have been introduced which ask a user queries whether
certain statements must or must not hold in the intended domain. The given
answers help to gradually restrict the search space for KB repairs.
Existing interactive debuggers often rely on a pool-based strategy for query
computation. A pool of query candidates is precomputed, from which the best
candidate according to some query quality criterion is selected to be shown to
the user. This often leads to the generation of many unnecessary query
candidates and thus to a high number of expensive calls to logical reasoning
services. We tackle this issue by an in-depth mathematical analysis of diverse
real-valued active learning query selection measures in order to determine
qualitative criteria that make a query favorable. These criteria are the key to
devising efficient heuristic query search methods. The proposed methods enable
for the first time a completely reasoner-free query generation for interactive
KB debugging while at the same time guaranteeing optimality conditions, e.g.
minimal cardinality or best understandability for the user, of the generated
query that existing methods cannot realize.
Further, we study different relations between active learning measures. The
obtained picture gives a hint about which measures are more favorable in which
situation or which measures always lead to the same outcomes, based on given
types of queries.
| [
{
"version": "v1",
"created": "Thu, 8 Sep 2016 20:48:32 GMT"
},
{
"version": "v2",
"created": "Tue, 30 May 2017 09:57:45 GMT"
}
] | 1,496,188,800,000 | [
[
"Rodler",
"Patrick",
""
]
] |
1609.02646 | Ian Davidson | Sean Gilpin, Chia-Tung Kuo, Tina Eliassi-Rad, Ian Davidson | Some Advances in Role Discovery in Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Role discovery in graphs is an emerging area that allows analysis of complex
graphs in an intuitive way. In contrast to other graph problems such as
community discovery, which finds groups of highly connected nodes, the role
discovery problem finds groups of nodes that share similar graph topological
structure. However, existing work so far has two severe limitations that
prevent its use in some domains. Firstly, it is completely unsupervised which
is undesirable for a number of reasons. Secondly, most work is limited to a
single relational graph. We address both these limitations in an intuitive
and easy to implement alternating least squares framework. Our framework allows
convex constraints to be placed on the role discovery problem which can provide
useful supervision. In particular we explore supervision to enforce i)
sparsity, ii) diversity and iii) alternativeness. We then show how to lift this
work for multi-relational graphs. A natural representation of a
multi-relational graph is an order-3 tensor (rather than a matrix), and a
Tucker decomposition allows us to find complex interactions between collections
of entities (E-groups) and the roles they play for a combination of relations
(R-groups). Existing Tucker decomposition methods in tensor toolboxes are not
suited for our purpose, so we create our own algorithm that we demonstrate is
pragmatically useful.
| [
{
"version": "v1",
"created": "Fri, 9 Sep 2016 03:13:55 GMT"
}
] | 1,473,638,400,000 | [
[
"Gilpin",
"Sean",
""
],
[
"Kuo",
"Chia-Tung",
""
],
[
"Eliassi-Rad",
"Tina",
""
],
[
"Davidson",
"Ian",
""
]
] |
1609.03145 | Volker Tresp | Volker Tresp and Maximilian Nickel | Relational Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We provide a survey on relational models. Relational models describe complete
networked domains by taking into account global dependencies in the data.
Relational models can lead to more accurate predictions if compared to
non-relational machine learning approaches. Relational models typically are
based on probabilistic graphical models, e.g., Bayesian networks, Markov
networks, or latent variable models. Relational models have applications in
social networks analysis, the modeling of knowledge graphs, bioinformatics,
recommendation systems, natural language processing, medical decision support,
and linked data.
| [
{
"version": "v1",
"created": "Sun, 11 Sep 2016 10:14:18 GMT"
}
] | 1,473,724,800,000 | [
[
"Tresp",
"Volker",
""
],
[
"Nickel",
"Maximilian",
""
]
] |
1609.03250 | Nan Ye | Nan Ye and Adhiraj Somani and David Hsu and Wee Sun Lee | DESPOT: Online POMDP Planning with Regularization | 36 pages | JAIR 58 (2017) 231-266 | 10.1613/jair.5328 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The partially observable Markov decision process (POMDP) provides a
principled general framework for planning under uncertainty, but solving POMDPs
optimally is computationally intractable, due to the "curse of dimensionality"
and the "curse of history". To overcome these challenges, we introduce the
Determinized Sparse Partially Observable Tree (DESPOT), a sparse approximation
of the standard belief tree, for online planning under uncertainty. A DESPOT
focuses online planning on a set of randomly sampled scenarios and compactly
captures the "execution" of all policies under these scenarios. We show that
the best policy obtained from a DESPOT is near-optimal, with a regret bound
that depends on the representation size of the optimal policy. Leveraging this
result, we give an anytime online planning algorithm, which searches a DESPOT
for a policy that optimizes a regularized objective function. Regularization
balances the estimated value of a policy under the sampled scenarios and the
policy size, thus avoiding overfitting. The algorithm demonstrates strong
experimental results, compared with some of the best online POMDP algorithms
available. It has also been incorporated into an autonomous driving system for
real-time vehicle control. The source code for the algorithm is available
online.
| [
{
"version": "v1",
"created": "Mon, 12 Sep 2016 02:12:13 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Mar 2017 07:28:31 GMT"
},
{
"version": "v3",
"created": "Tue, 19 Sep 2017 03:29:57 GMT"
}
] | 1,505,865,600,000 | [
[
"Ye",
"Nan",
""
],
[
"Somani",
"Adhiraj",
""
],
[
"Hsu",
"David",
""
],
[
"Lee",
"Wee Sun",
""
]
] |
1609.03765 | Umberto Grandi | Ulle Endriss and Umberto Grandi | Graph Aggregation | null | Artificial Intelligence, Volume 245, pages 86-114, 2017 | 10.1016/j.artint.2017.01.001 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph aggregation is the process of computing a single output graph that
constitutes a good compromise between several input graphs, each provided by a
different source. One needs to perform graph aggregation in a wide variety of
situations, e.g., when applying a voting rule (graphs as preference orders),
when consolidating conflicting views regarding the relationships between
arguments in a debate (graphs as abstract argumentation frameworks), or when
computing a consensus between several alternative clusterings of a given
dataset (graphs as equivalence relations). In this paper, we introduce a formal
framework for graph aggregation grounded in social choice theory. Our focus is
on understanding which properties shared by the individual input graphs will
transfer to the output graph returned by a given aggregation rule. We consider
both common properties of graphs, such as transitivity and reflexivity, and
arbitrary properties expressible in certain fragments of modal logic. Our
results establish several connections between the types of properties preserved
under aggregation and the choice-theoretic axioms satisfied by the rules used.
The most important of these results is a powerful impossibility theorem that
generalises Arrow's seminal result for the aggregation of preference orders to
a large collection of different types of graphs.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2016 11:08:23 GMT"
}
] | 1,528,848,000,000 | [
[
"Endriss",
"Ulle",
""
],
[
"Grandi",
"Umberto",
""
]
] |
1609.03847 | Daniel Bryce | Daniel Bryce, Sergiy Bogomolov, Alexander Heinz, Christian Schilling | Instrumenting an SMT Solver to Solve Hybrid Network Reachability
Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PDDL+ planning has its semantics rooted in hybrid automata (HA) and recent
work has shown that it can be modeled as a network of HAs. Addressing the
complexity of nonlinear PDDL+ planning as HAs requires both space and time
efficient reasoning. Unfortunately, existing solvers either do not address
nonlinear dynamics or do not natively support networks of automata.
We present a new algorithm, called HNSolve, which guides the variable
selection of the dReal Satisfiability Modulo Theories (SMT) solver while
reasoning about network encodings of nonlinear PDDL+ planning as HAs. HNSolve
tightly integrates with dReal by solving a discrete abstraction of the HA
network. HNSolve finds composite runs on the HA network that ignore continuous
variables, but respect mode jumps and synchronization labels. HNSolve
admissibly detects dead-ends in the discrete abstraction, and posts conflict
clauses that prune the SMT solver's search. We evaluate the benefits of our
HNSolve algorithm on PDDL+ benchmark problems and demonstrate its performance
with respect to prior work.
| [
{
"version": "v1",
"created": "Tue, 13 Sep 2016 14:17:32 GMT"
}
] | 1,473,811,200,000 | [
[
"Bryce",
"Daniel",
""
],
[
"Bogomolov",
"Sergiy",
""
],
[
"Heinz",
"Alexander",
""
],
[
"Schilling",
"Christian",
""
]
] |
1609.04648 | Thomas Voigtmann | A. Atashpendar, T. Schilling and Th. Voigtmann | Sequencing Chess | null | null | 10.1209/0295-5075/116/10009 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the structure of the state space of chess by means of transition
path sampling Monte Carlo simulation. Based on the typical number of moves
required to transpose a given configuration of chess pieces into another, we
conclude that the state space consists of several pockets between which
transitions are rare. Skilled players explore an even smaller subset of
positions that populate some of these pockets only very sparsely. These results
suggest that the usual measures to estimate both the size of the state space
and the size of the tree of legal moves, are not unique indicators of the
complexity of the game, but that topological considerations are equally
important.
| [
{
"version": "v1",
"created": "Wed, 14 Sep 2016 10:13:42 GMT"
}
] | 1,482,278,400,000 | [
[
"Atashpendar",
"A.",
""
],
[
"Schilling",
"T.",
""
],
[
"Voigtmann",
"Th.",
""
]
] |
1609.04879 | Jeffrey Georgeson | Jeffrey Georgeson and Christopher Child | NPCs as People, Too: The Extreme AI Personality Engine | 9 pages, 3 tables, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PK Dick once asked "Do Androids Dream of Electric Sheep?" In video games, a
similar question could be asked of non-player characters: Do NPCs have dreams?
Can they live and change as humans do? Can NPCs have personalities, and can
these develop through interactions with players, other NPCs, and the world
around them? Despite advances in personality AI for games, most NPCs are still
undeveloped and undeveloping, reacting with flat affect and predictable
routines that make them far less than human--in fact, they become little more
than bits of the scenery that give out parcels of information. This need not be
the case. Extreme AI, a psychology-based personality engine, creates adaptive
NPC personalities. Originally developed as part of the thesis "NPCs as People:
Using Databases and Behaviour Trees to Give Non-Player Characters Personality,"
Extreme AI is now a fully functioning personality engine using all thirty
facets of the Five Factor model of personality and an AI system that is live
throughout gameplay. This paper discusses the research leading to Extreme AI;
develops the ideas found in that thesis; discusses the development of other
personality engines; and provides examples of Extreme AI's use in two game
demos.
| [
{
"version": "v1",
"created": "Thu, 15 Sep 2016 22:40:29 GMT"
}
] | 1,474,243,200,000 | [
[
"Georgeson",
"Jeffrey",
""
],
[
"Child",
"Christopher",
""
]
] |
1609.05140 | Pierre-Luc Bacon | Pierre-Luc Bacon, Jean Harb and Doina Precup | The Option-Critic Architecture | Accepted to the Thirthy-first AAAI Conference On Artificial
Intelligence (AAAI), 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal abstraction is key to scaling up learning and planning in
reinforcement learning. While planning with temporally extended actions is well
understood, creating such abstractions autonomously from data has remained
challenging. We tackle this problem in the framework of options [Sutton, Precup
& Singh, 1999; Precup, 2000]. We derive policy gradient theorems for options
and propose a new option-critic architecture capable of learning both the
internal policies and the termination conditions of options, in tandem with the
policy over options, and without the need to provide any additional rewards or
subgoals. Experimental results in both discrete and continuous environments
showcase the flexibility and efficiency of the framework.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 17:05:55 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Dec 2016 02:47:51 GMT"
}
] | 1,480,982,400,000 | [
[
"Bacon",
"Pierre-Luc",
""
],
[
"Harb",
"Jean",
""
],
[
"Precup",
"Doina",
""
]
] |
1609.05170 | Christophe Roche | Christophe Roche | Should Terminology Principles be re-examined? | Proceedings of the 10th Terminology and Knowledge Engineering
Conference (TKE 2012), pp.17-32. 19-22 June 2012, Madrid, Spain | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Operationalization of terminology for IT applications has revived the
Wusterian approach. The conceptual dimension once more prevails after taking
a back seat to specialised lexicography. This is demonstrated by the emergence of
ontology in terminology. While the Terminology Principles as defined in Felber's
manual and the ISO standards remain at the core of traditional terminology,
their computational implementation raises some issues. In this article, while
reiterating their importance, we will be re-examining these Principles from a
dual perspective: that of logic in the mathematical sense of the term and that
of epistemology as in the theory of knowledge. We will thus be clarifying and
describing some of them so as to take into account advances in knowledge
engineering (ontology) and formal systems (logic). The notion of
ontoterminology, terminology whose conceptual system is a formal ontology,
results from this approach.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 18:33:20 GMT"
}
] | 1,474,243,200,000 | [
[
"Roche",
"Christophe",
""
]
] |
1609.05224 | Anthony Young | Anthony P. Young, Sanjay Modgil, Odinaldo Rodrigues | Prioritised Default Logic as Argumentation with Partial Order Default
Priorities | 50 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We express Brewka's prioritised default logic (PDL) as argumentation using
ASPIC+. By representing PDL as argumentation and designing an argument
preference relation that takes the argument structure into account, we prove
that the conclusions of the justified arguments correspond to the PDL
extensions. We will first assume that the default priority is total, and then
generalise to the case where it is a partial order. This provides a
characterisation of non-monotonic inference in PDL as an exchange of argument
and counter-argument, providing a basis for distributed non-monotonic reasoning
in the form of dialogue.
| [
{
"version": "v1",
"created": "Thu, 25 Aug 2016 20:51:07 GMT"
}
] | 1,474,329,600,000 | [
[
"Young",
"Anthony P.",
""
],
[
"Modgil",
"Sanjay",
""
],
[
"Rodrigues",
"Odinaldo",
""
]
] |
1609.05566 | Russell Stewart | Russell Stewart, Stefano Ermon | Label-Free Supervision of Neural Networks with Physics and Domain
Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many machine learning applications, labeled data is scarce and obtaining
more labels is expensive. We introduce a new approach to supervising neural
networks by specifying constraints that should hold over the output space,
rather than direct examples of input-output pairs. These constraints are
derived from prior domain knowledge, e.g., from known laws of physics. We
demonstrate the effectiveness of this approach on real world and simulated
computer vision tasks. We are able to train a convolutional neural network to
detect and track objects without any labeled examples. Our approach can
significantly reduce the need for labeled training data, but introduces new
challenges for encoding prior knowledge into appropriate loss functions.
| [
{
"version": "v1",
"created": "Sun, 18 Sep 2016 23:16:14 GMT"
}
] | 1,474,329,600,000 | [
[
"Stewart",
"Russell",
""
],
[
"Ermon",
"Stefano",
""
]
] |
1609.05616 | Kumar Sankar Ray | Kumar Sankar Ray, Sandip Paul, Diganta Saha | Preorder-Based Triangle: A Modified Version of Bilattice-Based Triangle
for Belief Revision in Nonmonotonic Reasoning | null | Journal of Experimental & Theoretical Artificial Intelligence
Volume 30, 2018 - Issue 5 | 10.1080/0952813X.2018.1467493 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bilattice-based triangle provides an elegant algebraic structure for
reasoning with vague and uncertain information. But the truth and knowledge
ordering of intervals in the bilattice-based triangle cannot handle repetitive
belief revisions which is an essential characteristic of nonmonotonic
reasoning. Moreover the ordering induced over the intervals by the
bilattice-based triangle is sometimes not intuitive. In this work, we construct
an alternative algebraic structure, namely the preorder-based triangle, and we
formulate proper logical connectives for this. It is also demonstrated that
the preorder-based triangle serves as a better alternative to the
bilattice-based triangle for reasoning in application areas that involve
nonmonotonic fuzzy reasoning with uncertain information.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2016 07:28:43 GMT"
},
{
"version": "v2",
"created": "Tue, 3 Jan 2017 10:37:58 GMT"
},
{
"version": "v3",
"created": "Fri, 12 May 2017 09:04:31 GMT"
},
{
"version": "v4",
"created": "Tue, 7 Nov 2017 18:26:49 GMT"
}
] | 1,606,176,000,000 | [
[
"Ray",
"Kumar Sankar",
""
],
[
"Paul",
"Sandip",
""
],
[
"Saha",
"Diganta",
""
]
] |
1609.05632 | Tomas Teijeiro | Tom\'as Teijeiro and Paulo F\'elix | On the adoption of abductive reasoning for time series interpretation | 44 pages, 9 figures | Artificial Intelligence 262:163-188 (2018) | 10.1016/j.artint.2018.06.005 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series interpretation aims to provide an explanation of what is observed
in terms of its underlying processes. The present work is based on the
assumption that the common classification-based approaches to time series
interpretation suffer from a set of inherent weaknesses, whose ultimate cause
lies in the monotonic nature of the deductive reasoning paradigm. In this
document we propose a new approach to this problem, based on the initial
hypothesis that abductive reasoning properly accounts for the human ability to
identify and characterize the patterns appearing in a time series. The result
of this interpretation is a set of conjectures in the form of observations,
organized into an abstraction hierarchy and explaining what has been observed.
A knowledge-based framework and a set of algorithms for the interpretation task
are provided, implementing a hypothesize-and-test cycle guided by an
attentional mechanism. As a representative application domain, interpretation
of the electrocardiogram allows us to highlight the strengths of the proposed
approach in comparison with traditional classification-based approaches.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2016 08:31:18 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Dec 2017 11:15:01 GMT"
},
{
"version": "v3",
"created": "Mon, 25 Jun 2018 07:32:57 GMT"
}
] | 1,639,008,000,000 | [
[
"Teijeiro",
"Tomás",
""
],
[
"Félix",
"Paulo",
""
]
] |
1609.05705 | Renato Krohling | R.A. Krohling, Artem dos Santos, A.G.C. Pacheco | TODIM and TOPSIS with Z-numbers | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present an approach that is able to handle Z-numbers
in the context of Multi-Criteria Decision Making (MCDM) problems. Z-numbers are
composed of two parts, the first one is a restriction on the values that can be
assumed, and the second part is the reliability of the information. As human
beings we communicate with other people by means of natural language using
sentences like: the journey time from home to university takes about half hour,
very likely. Firstly, Z-numbers are converted to fuzzy numbers using a standard
procedure. Next, the Z-TODIM and Z-TOPSIS are presented as a direct extension
of the fuzzy TODIM and fuzzy TOPSIS, respectively. The proposed methods are
applied to two case studies and compared with the standard approach using crisp
values. Results obtained show the feasibility of the approach. In addition, a
graphical interface was built to handle both methods, Z-TODIM and Z-TOPSIS,
allowing ease of use for users in other areas of knowledge.
| [
{
"version": "v1",
"created": "Mon, 19 Sep 2016 13:13:19 GMT"
}
] | 1,474,329,600,000 | [
[
"Krohling",
"R. A.",
""
],
[
"Santos",
"Artem dos",
""
],
[
"Pacheco",
"A. G. C.",
""
]
] |
1609.06375 | Patrick Rodler | Patrick Rodler | A Theory of Interactive Debugging of Knowledge Bases in Monotonic Logics | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A broad variety of knowledge-based applications such as recommender, expert,
planning or configuration systems usually operate on the basis of knowledge
represented by means of some logical language. Such a logical knowledge base
(KB) enables intelligent behavior of such systems by allowing them to
automatically reason, answer queries of interest or solve complex real-world
problems. Nowadays, where information acquisition comes at low costs and often
happens automatically, the applied KBs are continuously growing in terms of
size, information content and complexity. These developments foster the
emergence of errors in these KBs and thus pose a significant challenge on all
people and tools involved in KB evolution, maintenance and application.
If some minimal quality criteria such as logical consistency are not met by
some KB, it becomes useless for knowledge-based applications. To guarantee the
compliance of KBs with given requirements, (non-interactive) KB debuggers have
been proposed. These however often cannot localize all potential faults,
suggest too large or incorrect modifications of the faulty KB or suffer from
poor scalability due to the inherent complexity of the KB debugging problem.
As a remedy to these issues, based on a well-founded theoretical basis this
work proposes complete, sound and optimal methods for the interactive debugging
of KBs that suggest the one (minimally invasive) error correction of the faulty
KB that yields a repaired KB with exactly the intended semantics. Users, e.g.
domain experts, are involved in the debugging process by answering
automatically generated queries whether some given statements must or must not
hold in the domain that should be modeled by the problematic KB at hand.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2016 22:31:38 GMT"
}
] | 1,474,502,400,000 | [
[
"Rodler",
"Patrick",
""
]
] |
1609.06953 | Azlan Iqbal | Azlan Iqbal | The Digital Synaptic Neural Substrate: Size and Quality Matters | 7 pages, 7 Figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the 'Digital Synaptic Neural Substrate' (DSNS) computational
creativity approach further with respect to the size and quality of images that
can be used to seed the process. In previous work we demonstrated how combining
photographs of people and sequences taken from chess games between weak players
can be used to generate chess problems or puzzles of higher aesthetic quality,
on average, compared to alternative approaches. In this work we show
experimentally that using larger images as opposed to smaller ones improves the
output quality even further. The same is also true for using clearer or less
corrupted images. The reasons why these things influence the DSNS process are
presently not well understood and debatable, but the findings are nevertheless
immediately applicable for obtaining better results.
| [
{
"version": "v1",
"created": "Tue, 20 Sep 2016 11:26:46 GMT"
}
] | 1,474,588,800,000 | [
[
"Iqbal",
"Azlan",
""
]
] |
1609.07102 | Jos\'e M. Gim\'enez-Garc\'ia | Jos\'e M. Gim\'enez-Garc\'ia, Antoine Zimmermann, Pierre Maret | NdFluents: A Multi-dimensional Contexts Ontology | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Annotating semantic data with metadata is becoming more and more important to
provide information about the statements being asserted. While initial
solutions proposed a data model to represent a specific dimension of
meta-information (such as time or provenance), a general annotation
framework that allows representing different context dimensions is
needed. In this paper, we extend the 4dFluents ontology by Welty and Fikes---on
associating temporal validity to statements---to any dimension of context, and
discuss possible issues that multidimensional context representations have to
face and how we address them.
| [
{
"version": "v1",
"created": "Thu, 22 Sep 2016 18:37:12 GMT"
}
] | 1,474,588,800,000 | [
[
"Giménez-García",
"José M.",
""
],
[
"Zimmermann",
"Antoine",
""
],
[
"Maret",
"Pierre",
""
]
] |
1609.07772 | J. G. Wolff | J Gerard Wolff | Commonsense Reasoning, Commonsense Knowledge, and The SP Theory of
Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes how the "SP Theory of Intelligence" with the "SP
Computer Model", outlined in an Appendix, may throw light on aspects of
commonsense reasoning (CSR) and commonsense knowledge (CSK), as discussed in
another paper by Ernest Davis and Gary Marcus (DM). In four main sections, the
paper describes: 1) The main problems to be solved; 2) Other research on CSR
and CSK; 3) Why the SP system may prove useful with CSR and CSK 4) How examples
described by DM may be modelled in the SP system. With regard to successes in
the automation of CSR described by DM, the SP system's strengths in
simplification and integration may promote seamless integration across these
areas, and seamless integration of those areas with other aspects of
intelligence. In considering challenges in the automation of CSR described by
DM, the paper describes in detail, with examples of SP-multiple-alignments, how
the SP system may model processes of interpretation and reasoning arising from
the horse's head scene in "The Godfather" film. A solution is presented to the
'long tail' problem described by DM. The SP system has some potentially useful
things to say about several of DM's objectives for research in CSR and CSK.
| [
{
"version": "v1",
"created": "Sun, 25 Sep 2016 16:48:16 GMT"
},
{
"version": "v2",
"created": "Sat, 4 Aug 2018 10:42:51 GMT"
}
] | 1,533,600,000,000 | [
[
"Wolff",
"J Gerard",
""
]
] |
1609.08439 | Dejanira Araiza-Illan | Dejanira Araiza-Illan, Anthony G. Pipe, Kerstin Eder | Model-based Test Generation for Robotic Software: Automata versus
Belief-Desire-Intention Agents | arXiv admin note: text overlap with arXiv:1603.00656 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic code needs to be verified to ensure its safety and functional
correctness, especially when the robot is interacting with people. Testing real
code in simulation is a viable option. However, generating tests that cover
rare scenarios, as well as exercising most of the code, is a challenge
amplified by the complexity of the interactions between the environment and the
software. Model-based test generation methods can automate otherwise manual
processes and facilitate reaching rare scenarios during testing. In this paper,
we compare using Belief-Desire-Intention (BDI) agents as models for test
generation with more conventional automata-based techniques that exploit model
checking, in terms of practicality, performance, transferability to different
scenarios, and exploration (`coverage'), through two case studies: a
cooperative manufacturing task, and a home care scenario. The results highlight
the advantages of using BDI agents for test generation. BDI agents naturally
emulate the agency present in Human-Robot Interactions (HRIs), and are thus
more expressive than automata. The performance of the BDI-based test generation
is at least as high, and the achieved coverage is higher or equivalent,
compared to test generation based on model checking automata.
| [
{
"version": "v1",
"created": "Fri, 16 Sep 2016 14:07:28 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Dec 2016 11:23:48 GMT"
}
] | 1,481,587,200,000 | [
[
"Araiza-Illan",
"Dejanira",
""
],
[
"Pipe",
"Anthony G.",
""
],
[
"Eder",
"Kerstin",
""
]
] |
1609.08470 | Doron Friedman | Doron Friedman | A computer program for simulating time travel and a possible 'solution'
for the grandfather paradox | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While the possibility of time travel in physics is still debated, the
explosive growth of virtual-reality simulations opens up new possibilities to
rigorously explore such time travel and its consequences in the digital domain.
Here we provide a computational model of time travel and a computer program
that allows exploring digital time travel. In order to explain our method we
formalize a simplified version of the famous grandfather paradox, show how the
system can allow the participant to go back in time, try to kill their
ancestors before they were born, and experience the consequences. The system
has even come up with scenarios that can be considered consistent "solutions"
of the grandfather paradox. We discuss the conditions for digital time travel,
which indicate that it has a large number of practical applications.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2016 15:09:29 GMT"
}
] | 1,475,020,800,000 | [
[
"Friedman",
"Doron",
""
]
] |
1609.08524 | Tathagata Chakraborti | Tathagata Chakraborti, Kartik Talamadupula, Kshitij P. Fadnis, Murray
Campbell, Subbarao Kambhampati | UbuntuWorld 1.0 LTS - A Platform for Automated Problem Solving &
Troubleshooting in the Ubuntu OS | Appeared (under the same title) in AAAI/IAAI 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present UbuntuWorld 1.0 LTS - a platform for developing
automated technical support agents in the Ubuntu operating system.
Specifically, we propose to use the Bash terminal as a simulator of the Ubuntu
environment for a learning-based agent and demonstrate the usefulness of
adopting reinforcement learning (RL) techniques for basic problem solving and
troubleshooting in this environment. We provide a plug-and-play interface to
the simulator as a python package where different types of agents can be
plugged in and evaluated, and provide pathways for integrating data from online
support forums like AskUbuntu into an automated agent's learning process.
Finally, we show that the use of this data significantly improves the agent's
learning efficiency. We believe that this platform can be adopted as a
real-world test bed for research on automated technical support.
| [
{
"version": "v1",
"created": "Tue, 27 Sep 2016 16:42:30 GMT"
},
{
"version": "v2",
"created": "Sat, 12 Aug 2017 21:31:02 GMT"
}
] | 1,502,755,200,000 | [
[
"Chakraborti",
"Tathagata",
""
],
[
"Talamadupula",
"Kartik",
""
],
[
"Fadnis",
"Kshitij P.",
""
],
[
"Campbell",
"Murray",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1609.08925 | Ekaterina Arafailova | Ekaterina Arafailova and Nicolas Beldiceanu and R\'emi Douence and
Mats Carlsson and Pierre Flener and Mar\'ia Andre\'ina Francisco Rodr\'iguez
and Justin Pearson and Helmut Simonis | Global Constraint Catalog, Volume II, Time-Series Constraints | 3762 pages | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | First this report presents a restricted set of finite transducers used to
synthesise structural time-series constraints described by means of a
multi-layered function composition scheme. Second it provides the corresponding
synthesised catalogue of structural time-series constraints where each
constraint is explicitly described in terms of automata with registers.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2016 19:06:11 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Sep 2018 19:08:03 GMT"
}
] | 1,537,488,000,000 | [
[
"Arafailova",
"Ekaterina",
""
],
[
"Beldiceanu",
"Nicolas",
""
],
[
"Douence",
"Rémi",
""
],
[
"Carlsson",
"Mats",
""
],
[
"Flener",
"Pierre",
""
],
[
"Rodríguez",
"María Andreína Francisco",
""
],
[
"Pearson",
"Justin",
""
],
[
"Simonis",
"Helmut",
""
]
] |
1609.09253 | Ivan Grechikhin | Ivan S. Grechikhin | Heuristic with elements of tabu search for Truck and Trailer Routing
Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vehicle Routing Problem is a well-known problem in logistics and
transportation, and the variety of such problems is explained by the fact that
they occur in many real-life situations. It is an NP-hard combinatorial
optimization problem and finding an exact optimal solution is practically
impossible. In this work, Site-Dependent Truck and Trailer Routing Problem with
hard and soft Time Windows and Split Deliveries is considered (SDTTRPTWSD). In
this article, we develop a heuristic with the elements of Tabu Search for
solving SDTTRPTWSD. The heuristic uses the concept of neighborhoods and visits
infeasible solutions during the search. A greedy heuristic is applied to
construct an initial solution.
| [
{
"version": "v1",
"created": "Thu, 29 Sep 2016 08:37:48 GMT"
}
] | 1,475,193,600,000 | [
[
"Grechikhin",
"Ivan S.",
""
]
] |
1609.09748 | Arnaud Martin | Amal Ben Rjab (LARODEC, DRUID), Mouloud Kharoune (DRUID), Zoltan
Miklos (DRUID), Arnaud Martin (DRUID) | Characterization of experts in crowdsourcing platforms | in The 4th International Conference on Belief Functions, Sep 2016,
Prague, Czech Republic | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowdsourcing platforms make it possible to propose simple human intelligence tasks to
a large number of participants who carry out these tasks. The workers often
receive a small amount of money, or the platforms include some other incentive
mechanisms; for example, they can increase the workers' reputation score if they
complete the tasks correctly. We address the problem of identifying experts
among participants, that is, workers, who tend to answer the questions
correctly. Knowing who are the reliable workers could improve the quality of
knowledge one can extract from responses. As opposed to other works in the
literature, we assume that participants can give partial or incomplete
responses, in case they are not sure that their answers are correct. We model
such partial or incomplete responses with the help of belief functions, and we
derive a measure that characterizes the expertise level of each participant.
This measure is based on precise and exactitude degrees that represent two
parts of the expertise level. The precision degree reflects the reliability
level of the participants and the exactitude degree reflects the knowledge
level of the participants. We also analyze our model through simulation and
demonstrate that our richer model can lead to more reliable identification of
experts.
| [
{
"version": "v1",
"created": "Fri, 30 Sep 2016 14:23:42 GMT"
}
] | 1,475,452,800,000 | [
[
"Rjab",
"Amal Ben",
"",
"LARODEC, DRUID"
],
[
"Kharoune",
"Mouloud",
"",
"DRUID"
],
[
"Miklos",
"Zoltan",
"",
"DRUID"
],
[
"Martin",
"Arnaud",
"",
"DRUID"
]
] |
1610.00378 | Joseph Ramsey | Joseph Ramsey | Improving Accuracy and Scalability of the PC Algorithm by Maximizing
P-value | 11 pages, 4 figures, 2 tables, technical report | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A number of attempts have been made to improve accuracy and/or scalability of
the PC (Peter and Clark) algorithm, some well known (Buhlmann, et al., 2010;
Kalisch and Buhlmann, 2007; 2008; Zhang, 2012, to give some examples). We add
here one more tool to the toolbox: the simple observation that if one is forced
to choose between a variety of possible conditioning sets for a pair of
variables, one should choose the one with the highest p-value. One can use the
CPC (Conservative PC, Ramsey et al., 2012) algorithm as a guide to possible
sepsets for a pair of variables. However, whereas CPC uses a voting rule to
classify colliders versus noncolliders, our proposed algorithm, PC-Max, picks
the conditioning set with the highest p-value, so that there are no
ambiguities. We combine this with two other optimizations: (a) avoiding
bidirected edges in the orientation of colliders, and (b) parallelization. For
(b) we borrow ideas from the PC-Stable algorithm (Colombo and Maathuis, 2014).
The result is an algorithm that scales quite well both in terms of accuracy and
time, with no risk of bidirected edges.
| [
{
"version": "v1",
"created": "Mon, 3 Oct 2016 00:47:51 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Oct 2016 17:24:45 GMT"
}
] | 1,475,712,000,000 | [
[
"Ramsey",
"Joseph",
""
]
] |
1610.00442 | Sixue Liu | Sixue Liu and Gerard de Melo | Should Algorithms for Random SAT and Max-SAT be Different? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze to what extent the random SAT and Max-SAT problems differ in their
properties. Our findings suggest that for random $k$-CNF with ratio in a
certain range, Max-SAT can be solved by any SAT algorithm with subexponential
slowdown, while for formulae with ratios greater than some constant, algorithms
under the random walk framework require substantially different heuristics. In
light of these results, we propose a novel probabilistic approach for random
Max-SAT called ProMS. Experimental results illustrate that ProMS outperforms
many state-of-the-art local search solvers on random Max-SAT benchmarks.
| [
{
"version": "v1",
"created": "Mon, 3 Oct 2016 08:30:47 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Nov 2018 07:27:51 GMT"
}
] | 1,541,376,000,000 | [
[
"Liu",
"Sixue",
""
],
[
"de Melo",
"Gerard",
""
]
] |
1610.00689 | Yexiang Xue | Yexiang Xue, Junwen Bai, Ronan Le Bras, Brendan Rappazzo, Richard
Bernstein, Johan Bjorck, Liane Longpre, Santosh K. Suram, Robert B. van
Dover, John Gregoire, Carla P. Gomes | Phase-Mapper: An AI Platform to Accelerate High Throughput Materials
Discovery | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-Throughput materials discovery involves the rapid synthesis,
measurement, and characterization of many different but structurally-related
materials. A key problem in materials discovery, the phase map identification
problem, involves the determination of the crystal phase diagram from the
materials' composition and structural characterization data. We present
Phase-Mapper, a novel AI platform to solve the phase map identification problem
that allows humans to interact with both the data and products of AI
algorithms, including the incorporation of human feedback to constrain or
initialize solutions. Phase-Mapper affords incorporation of any spectral
demixing algorithm, including our novel solver, AgileFD, which is based on a
convolutive non-negative matrix factorization algorithm. AgileFD can
incorporate constraints to capture the physics of the materials as well as
human feedback. We compare three solver variants with previously proposed
methods in a large-scale experiment involving 20 synthetic systems,
demonstrating the efficacy of imposing physical constrains using AgileFD.
Phase-Mapper has also been used by materials scientists to solve a wide variety
of phase diagrams, including the previously unsolved Nb-Mn-V oxide system,
which is provided here as an illustrative example.
| [
{
"version": "v1",
"created": "Mon, 3 Oct 2016 19:35:30 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Oct 2016 17:16:13 GMT"
}
] | 1,476,057,600,000 | [
[
"Xue",
"Yexiang",
""
],
[
"Bai",
"Junwen",
""
],
[
"Bras",
"Ronan Le",
""
],
[
"Rappazzo",
"Brendan",
""
],
[
"Bernstein",
"Richard",
""
],
[
"Bjorck",
"Johan",
""
],
[
"Longpre",
"Liane",
""
],
[
"Suram",
"Santosh K.",
""
],
[
"van Dover",
"Robert B.",
""
],
[
"Gregoire",
"John",
""
],
[
"Gomes",
"Carla P.",
""
]
] |
1610.01085 | Venkata Sriram Siddhardh (Sid) Nadendla | V. Sriram Siddhardh Nadendla, Swastik Brahma, Pramod K. Varshney | Towards the Design of Prospect-Theory based Human Decision Rules for
Hypothesis Testing | 8 pages, 5 figures, Presented at the 54th Annual Allerton Conference
on Communication, Control, and Computing, 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Detection rules have traditionally been designed for rational agents that
minimize the Bayes risk (average decision cost). With the advent of
crowd-sensing systems, there is a need to redesign binary hypothesis testing
rules for behavioral agents, whose cognitive behavior is not captured by
traditional utility functions such as Bayes risk. In this paper, we adopt
prospect theory based models for decision makers. We consider special agent
models, namely optimists and pessimists, and derive optimal
detection rules under different scenarios. Using an illustrative example, we
also show how the decision rule of a human agent deviates from the Bayesian
decision rule under the various behavioral models considered in this paper.
| [
{
"version": "v1",
"created": "Tue, 4 Oct 2016 16:52:03 GMT"
}
] | 1,475,798,400,000 | [
[
"Nadendla",
"V. Sriram Siddhardh",
""
],
[
"Brahma",
"Swastik",
""
],
[
"Varshney",
"Pramod K.",
""
]
] |
1610.01381 | Alasdair Thomason | Alasdair Thomason, Nathan Griffiths, Victor Sanchez | The Predictive Context Tree: Predicting Contexts and Interactions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With a large proportion of people carrying location-aware smartphones, we
have an unprecedented platform from which to understand individuals and predict
their future actions. This work builds upon the Context Tree data structure
that summarises the historical contexts of individuals from augmented
geospatial trajectories, and constructs a predictive model for their likely
future contexts. The Predictive Context Tree (PCT) is constructed as a
hierarchical classifier, capable of predicting both the future locations that a
user will visit and the contexts that a user will be immersed within. The PCT
is evaluated over real-world geospatial trajectories, and compared against
existing location extraction and prediction techniques, as well as a proposed
hybrid approach that uses identified land usage elements in combination with
machine learning to predict future interactions. Our results demonstrate that
higher predictive accuracies can be achieved using this hybrid approach over
traditional extracted location datasets, and the PCT itself matches the
performance of the hybrid approach at predicting future interactions, while
adding utility in the form of context predictions. Such a prediction system is
capable of understanding not only where a user will visit, but also their
context, in terms of what they are likely to be doing.
| [
{
"version": "v1",
"created": "Wed, 5 Oct 2016 12:14:57 GMT"
}
] | 1,475,712,000,000 | [
[
"Thomason",
"Alasdair",
""
],
[
"Griffiths",
"Nathan",
""
],
[
"Sanchez",
"Victor",
""
]
] |
1610.01525 | Udi Apsel | Udi Apsel | Lifted Message Passing for the Generalized Belief Propagation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the lifted Generalized Belief Propagation (GBP) message passing
algorithm, for the computation of sum-product queries in Probabilistic
Relational Models (e.g. Markov logic network). The algorithm forms a compact
region graph and establishes a modified version of message passing, which
mimics the GBP behavior in a corresponding ground model. The compact graph is
obtained by exploiting a graphical representation of clusters, which reduces
cluster symmetry detection to isomorphism tests on small local graphs. The
framework is thus capable of handling complex models, while remaining
domain-size independent.
| [
{
"version": "v1",
"created": "Wed, 5 Oct 2016 16:56:02 GMT"
}
] | 1,475,712,000,000 | [
[
"Apsel",
"Udi",
""
]
] |
1610.02293 | Sandra Castellanos-Paez | Sandra Castellanos-Paez (LIG Laboratoire d'Informatique de Grenoble),
Damien Pellier (LIG Laboratoire d'Informatique de Grenoble), Humbert Fiorino
(LIG Laboratoire d'Informatique de Grenoble), Sylvie Pesty (LIG Laboratoire
d'Informatique de Grenoble) | Learning Macro-actions for State-Space Planning | Journ{\'e}es Francophones sur la Planification, la D{\'e}cision et
l'Apprentissage pour la conduite de syst{\`e}mes (JFPDA 2016) , Jul 2016,
Grenoble, France. 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning has achieved significant progress in recent years. Among the various
approaches to scale up plan synthesis, the use of macro-actions has been widely
explored. As a first stage towards the development of a solution to learn
on-line macro-actions, we propose an algorithm to identify useful macro-actions
based on data mining techniques. The integration in the planning search of
these learned macro-actions shows significant improvements over four classical
planning benchmarks.
| [
{
"version": "v1",
"created": "Fri, 7 Oct 2016 14:06:40 GMT"
}
] | 1,476,057,600,000 | [
[
"Castellanos-Paez",
"Sandra",
"",
"LIG Laboratoire d'Informatique de Grenoble"
],
[
"Pellier",
"Damien",
"",
"LIG Laboratoire d'Informatique de Grenoble"
],
[
"Fiorino",
"Humbert",
"",
"LIG Laboratoire d'Informatique de Grenoble"
],
[
"Pesty",
"Sylvie",
"",
"LIG Laboratoire\n d'Informatique de Grenoble"
]
] |
1610.02591 | Yexiang Xue | Yexiang Xue, Zhiyuan Li, Stefano Ermon, Carla P. Gomes, Bart Selman | Solving Marginal MAP Problems with NP Oracles and Parity Constraints | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Arising from many applications at the intersection of decision making and
machine learning, Marginal Maximum A Posteriori (Marginal MAP) Problems unify
the two main classes of inference, namely maximization (optimization) and
marginal inference (counting), and are believed to have higher complexity than
both of them. We propose XOR_MMAP, a novel approach to solve the Marginal MAP
Problem, which represents the intractable counting subproblem with queries to
NP oracles, subject to additional parity constraints. XOR_MMAP provides a
constant factor approximation to the Marginal MAP Problem, by encoding it as a
single optimization problem whose size is polynomial in that of the original problem. We evaluate our
approach in several machine learning and decision making applications, and show
that our approach outperforms several state-of-the-art Marginal MAP solvers.
| [
{
"version": "v1",
"created": "Sat, 8 Oct 2016 22:32:35 GMT"
},
{
"version": "v2",
"created": "Tue, 29 Nov 2016 21:22:06 GMT"
}
] | 1,480,550,400,000 | [
[
"Xue",
"Yexiang",
""
],
[
"Li",
"Zhiyuan",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"Selman",
"Bart",
""
]
] |
1610.02707 | Yannis Assael | Hossam Mossalam, Yannis M. Assael, Diederik M. Roijers, Shimon
Whiteson | Multi-Objective Deep Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose Deep Optimistic Linear Support Learning (DOL) to solve
high-dimensional multi-objective decision problems where the relative
importances of the objectives are not known a priori. Using features from the
high-dimensional inputs, DOL computes the convex coverage set containing all
potential optimal solutions of the convex combinations of the objectives. To
our knowledge, this is the first time that deep reinforcement learning has
succeeded in learning multi-objective policies. In addition, we provide a
testbed with two experiments to be used as a benchmark for deep multi-objective
reinforcement learning.
| [
{
"version": "v1",
"created": "Sun, 9 Oct 2016 19:08:36 GMT"
}
] | 1,476,144,000,000 | [
[
"Mossalam",
"Hossam",
""
],
[
"Assael",
"Yannis M.",
""
],
[
"Roijers",
"Diederik M.",
""
],
[
"Whiteson",
"Shimon",
""
]
] |
1610.02847 | Daniel J Mankowitz | Daniel J. Mankowitz, Aviv Tamar and Shie Mannor | Situational Awareness by Risk-Conscious Skills | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical Reinforcement Learning has been previously shown to speed up the
convergence rate of RL planning algorithms as well as mitigate feature-based
model misspecification (Mankowitz et al. 2016a,b; Bacon 2015). To do so, it
utilizes hierarchical abstractions, also known as skills -- a type of
temporally extended action (Sutton et al. 1999) to plan at a higher level,
abstracting away from the lower-level details. We incorporate risk sensitivity,
also referred to as Situational Awareness (SA), into hierarchical RL for the
first time by defining and learning risk aware skills in a Probabilistic Goal
Semi-Markov Decision Process (PG-SMDP). This is achieved using our novel
Situational Awareness by Risk-Conscious Skills (SARiCoS) algorithm which comes
with a theoretical convergence guarantee. We show in a RoboCup soccer domain
that the learned risk aware skills exhibit complex human behaviors such as
`time-wasting' in a soccer game. In addition, the learned risk aware skills are
able to mitigate reward-based model misspecification.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2016 11:01:32 GMT"
}
] | 1,476,144,000,000 | [
[
"Mankowitz",
"Daniel J.",
""
],
[
"Tamar",
"Aviv",
""
],
[
"Mannor",
"Shie",
""
]
] |
1610.03024 | Kristijonas Cyras | Kristijonas \v{C}yras and Francesca Toni | ABA+: Assumption-Based Argumentation with Preferences | This is a preprint of a manuscript under review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present ABA+, a new approach to handling preferences in a well known
structured argumentation formalism, Assumption-Based Argumentation (ABA). In
ABA+, preference information given over assumptions is incorporated directly
into the attack relation, thus resulting in attack reversal. ABA+
conservatively extends ABA and exhibits various desirable features regarding
relationship among argumentation semantics as well as preference handling. We
also introduce Weak Contraposition, a principle concerning reasoning with rules
and preferences that relaxes the standard principle of contraposition, while
guaranteeing additional desirable features for ABA+.
| [
{
"version": "v1",
"created": "Mon, 10 Oct 2016 18:45:41 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Oct 2016 17:40:10 GMT"
}
] | 1,476,316,800,000 | [
[
"Čyras",
"Kristijonas",
""
],
[
"Toni",
"Francesca",
""
]
] |
1610.03573 | Azlan Iqbal | Paul Bonham and Azlan Iqbal | A Chain-Detection Algorithm for Two-Dimensional Grids | 28 pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a general method of detecting valid chains or links of pieces on
a two-dimensional grid, specifically using the example of the chess variant
known as Switch-Side Chain-Chess (SSCC). Presently, no foolproof method of
detecting such chains in any given chess position is known and existing graph
theory, to our knowledge, is unable to fully address this problem either. We
therefore propose a solution implemented and tested using the C++ programming
language. We have been unable to find an incorrect result and therefore offer
it as the most viable solution thus far to the chain-detection problem in this
chess variant. The algorithm is also scalable, in principle, to areas beyond
two-dimensional grids such as 3D analysis and molecular chemistry.
| [
{
"version": "v1",
"created": "Wed, 12 Oct 2016 01:34:34 GMT"
}
] | 1,476,316,800,000 | [
[
"Bonham",
"Paul",
""
],
[
"Iqbal",
"Azlan",
""
]
] |
1610.04028 | Arash Andalib | Arash Andalib, Mehdi Zare, Farid Atry | A fuzzy expert system for earthquake prediction, case study: the Zagros
range | 4 pages, 4 figures in proceedings of the third International
Conference on Modeling, Simulation and Applied Optimization, 2009 Corrected
typos, added publication information, Corrected typo, Added publication
information | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A methodology for the development of a fuzzy expert system (FES) with
application to earthquake prediction is presented. The idea is to reproduce the
performance of a human expert in earthquake prediction. To do this, at the
first step, rules provided by the human expert are used to generate a fuzzy
rule base. These rules are then fed into an inference engine to produce a fuzzy
inference system (FIS) and to infer the results. In this paper, we have used a
Sugeno type fuzzy inference system to build the FES. At the next step, the
adaptive network-based fuzzy inference system (ANFIS) is used to refine the FES
parameters and improve its performance. The proposed framework is then employed
to attain the performance of a human expert used to predict earthquakes in the
Zagros area based on the idea of coupled earthquakes. While the prediction
results are promising in parts of the testing set, the general performance
indicates that prediction methodology based on coupled earthquakes needs more
investigation and more complicated reasoning procedure to yield satisfactory
predictions.
| [
{
"version": "v1",
"created": "Thu, 13 Oct 2016 11:18:02 GMT"
},
{
"version": "v2",
"created": "Wed, 17 May 2017 21:23:01 GMT"
}
] | 1,495,152,000,000 | [
[
"Andalib",
"Arash",
""
],
[
"Zare",
"Mehdi",
""
],
[
"Atry",
"Farid",
""
]
] |
1610.04073 | Wenhao Huang | Wenhao Huang, Ge Li, Zhi Jin | Improved Knowledge Base Completion by Path-Augmented TransR Model | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge base completion aims to infer new relations from existing
information. In this paper, we propose the path-augmented TransR (PTransR) model
to improve the accuracy of link prediction. In our approach, we base the PTransR
model on TransR, which is the best one-hop model at present. Then we regularize
TransR with information from relation paths. In our experiment, we evaluate
PTransR on the task of entity prediction. Experimental results show that
PTransR outperforms previous models.
| [
{
"version": "v1",
"created": "Thu, 6 Oct 2016 08:34:15 GMT"
}
] | 1,476,403,200,000 | [
[
"Huang",
"Wenhao",
""
],
[
"Li",
"Ge",
""
],
[
"Jin",
"Zhi",
""
]
] |
1610.04964 | Pavel Surynek | Pavel Surynek, Petr Michal\'ik | Improvements in Sub-optimal Solving of the $(N^2-1)$-Puzzle via Joint
Relocation of Pebbles and its Applications to Rule-based Cooperative
Path-Finding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of solving $(n^2-1)$-puzzle and cooperative path-finding (CPF)
sub-optimally by rule based algorithms is addressed in this manuscript. The
task in the puzzle is to rearrange $n^2-1$ pebbles on a square grid of size
n x n, using the single vacant position, into a desired goal configuration. An
improvement to the existent polynomial-time algorithm is proposed and
experimentally analyzed. The improved algorithm is trying to move pebbles in a
more efficient way than the original algorithm by grouping them into so-called
snakes and moving them jointly within the snake. An experimental evaluation
showed that the algorithm using snakes produces solutions that are 8% to 9%
shorter than solutions generated by the original algorithm.
The snake-based relocation has been also integrated into rule-based
algorithms for solving the CPF problem sub-optimally, which is a closely
related task. The task in CPF is to relocate a group of abstract robots that
move over an undirected graph to given goal vertices. Robots can move to
unoccupied neighboring vertices and at most one robot can be placed in each
vertex. The $(n^2-1)$-puzzle is a special case of CPF where the underlying
graph is represented by a 4-connected grid and there is only one vacant vertex.
Two major rule-based algorithms for CPF were included in our study - BIBOX and
PUSH-and-SWAP (PUSH-and-ROTATE). Improvements gained by using snakes in the
BIBOX algorithm were stable around 30% in $(n^2-1)$-puzzle solving and up to
50% in CPFs over bi-connected graphs with various ear decompositions and
multiple vacant vertices. In the case of the PUSH-and-SWAP algorithm the
improvement achieved by snakes was around 5% to 8%. However, the improvement
was unstable and hardly predictable in the case of PUSH-and-SWAP.
| [
{
"version": "v1",
"created": "Mon, 17 Oct 2016 03:29:42 GMT"
}
] | 1,476,748,800,000 | [
[
"Surynek",
"Pavel",
""
],
[
"Michalík",
"Petr",
""
]
] |
1610.05402 | Luis Meira | Guilherme A. Zeni, Mauro Menzori, P. S. Martins, Luis A. A. Meira | VRPBench: A Vehicle Routing Benchmark Tool | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of optimization techniques in the combinatorial domain is large
and diversified. Nevertheless, there is still a lack of real benchmarks to
validate optimization algorithms. In this work we introduce VRPBench, a tool to
create instances and visualize solutions to the Vehicle Routing Problem (VRP)
in a planar graph embedded in the Euclidean 2D space. We use VRPBench to model
a real-world mail delivery case of the city of Artur Nogueira. Such scenarios
were characterized as a multi-objective optimization of the VRP. We extracted a
weighted graph from a digital map of the city to create a challenging benchmark
for the VRP. Each instance models one generic day of mail delivery with
hundreds to thousands of delivery points, thus allowing both the comparison and
validation of optimization algorithms for routing problems.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2016 02:01:16 GMT"
}
] | 1,476,835,200,000 | [
[
"Zeni",
"Guilherme A.",
""
],
[
"Menzori",
"Mauro",
""
],
[
"Martins",
"P. S.",
""
],
[
"Meira",
"Luis A. A.",
""
]
] |
1610.05452 | Pavel Surynek | Pavel Surynek | Makespan Optimal Solving of Cooperative Path-Finding via Reductions to
Propositional Satisfiability | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The problem of makespan optimal solving of cooperative path finding (CPF) is
addressed in this paper. The task in CPF is to relocate a group of agents in a
non-colliding way so that each agent eventually reaches its goal location from
the given initial location. The abstraction adopted in this work assumes that
agents are discrete items moving in an undirected graph by traversing edges.
Makespan optimal solving of CPF means to generate solutions that are as short
as possible in terms of the total number of time steps required for the
execution of the solution.
We show that reducing CPF to propositional satisfiability (SAT) represents a
viable option for obtaining makespan optimal solutions. Several encodings of
CPF into propositional formulae are suggested and experimentally evaluated. The
evaluation indicates that SAT based CPF solving outperforms other makespan
optimal methods significantly in highly constrained situations (environments
that are densely occupied by agents).
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2016 06:42:45 GMT"
}
] | 1,476,835,200,000 | [
[
"Surynek",
"Pavel",
""
]
] |
1610.05556 | Marta Arias | Gilles Blondel and Marta Arias and Ricard Gavald\`a | Identifiability and Transportability in Dynamic Causal Networks | Presented at the 2016 ACM SIGKDD Workshop on Causal Discovery | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a causal analog to the purely observational Dynamic
Bayesian Networks, which we call Dynamic Causal Networks. We provide a sound
and complete algorithm for identification of Dynamic Causal Networks, namely,
for computing the effect of an intervention or experiment, based on passive
observations only, whenever possible. We note the existence of two types of
confounder variables that affect in substantially different ways the
identification procedures, a distinction with no analog in either Dynamic Bayesian
Networks or standard causal graphs. We further propose a procedure for the
transportability of causal effects in Dynamic Causal Network settings, where
the result of causal experiments in a source domain may be used for the
identification of causal effects in a target domain.
| [
{
"version": "v1",
"created": "Tue, 18 Oct 2016 12:07:03 GMT"
}
] | 1,476,835,200,000 | [
[
"Blondel",
"Gilles",
""
],
[
"Arias",
"Marta",
""
],
[
"Gavaldà",
"Ricard",
""
]
] |
1610.06009 | Anand Kulkarni Dr | Omkar Kulkarni, Ninad Kulkarni, Anand J Kulkarni, Ganesh Kakandikar | Constrained Cohort Intelligence using Static and Dynamic Penalty
Function Approach for Mechanical Components Design | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the metaheuristics can efficiently solve unconstrained problems;
however, their performance may degenerate if the constraints are involved. This
paper proposes two constraint handling approaches for an emerging metaheuristic
of Cohort Intelligence (CI). More specifically CI with static penalty function
approach (SCI) and CI with dynamic penalty function approach (DCI) are
proposed. The approaches have been tested by solving several constrained test
problems. The performance of SCI and DCI has been compared with algorithms
such as GA, PSO, ABC, and d-Ds. In addition, three real-world problems from the
mechanical engineering domain were solved with improved solutions. The results were
satisfactory and validated the applicability of CI methodology for solving real
world problems.
| [
{
"version": "v1",
"created": "Mon, 26 Sep 2016 16:36:39 GMT"
}
] | 1,476,921,600,000 | [
[
"Kulkarni",
"Omkar",
""
],
[
"Kulkarni",
"Ninad",
""
],
[
"Kulkarni",
"Anand J",
""
],
[
"Kakandikar",
"Ganesh",
""
]
] |
1610.06473 | Benjam\'in Bedregal Prof. | Benjamin Bedregal, Humberto Bustince, Eduardo Palmeira, Gra\c{c}aliz
Pereira Dimuro and Javier Fernandez | Generalized Interval-valued OWA Operators with Interval Weights Derived
from Interval-valued Overlap Functions | null | null | 10.1016/j.ijar.2017.07.001 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we extend to the interval-valued setting the notion of an
overlap function, and we discuss a method which makes use of interval-valued
overlap functions for constructing OWA operators with interval-valued weights.
Some properties of interval-valued overlap functions and the derived
interval-valued OWA operators are analysed. We specially focus on the
homogeneity and migrativity properties.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2016 16:02:59 GMT"
}
] | 1,557,446,400,000 | [
[
"Bedregal",
"Benjamin",
""
],
[
"Bustince",
"Humberto",
""
],
[
"Palmeira",
"Eduardo",
""
],
[
"Dimuro",
"Graçaliz Pereira",
""
],
[
"Fernandez",
"Javier",
""
]
] |
1610.06490 | Oleksii Tyshchenko Dr | Zhengbing Hu, Yevgeniy V. Bodyanskiy, Oleksii K. Tyshchenko and Olena
O. Boiko | An Ensemble of Adaptive Neuro-Fuzzy Kohonen Networks for Online Data
Stream Fuzzy Clustering | null | I.J. Modern Education and Computer Science, 2016, 5, 12-18 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new approach to data stream clustering with the help of an ensemble of
adaptive neuro-fuzzy systems is proposed. The proposed ensemble is formed with
adaptive neuro-fuzzy self-organizing Kohonen maps in a parallel processing
mode. A final result is chosen by the best neuro-fuzzy self-organizing Kohonen
map.
| [
{
"version": "v1",
"created": "Thu, 20 Oct 2016 16:30:25 GMT"
}
] | 1,477,008,000,000 | [
[
"Hu",
"Zhengbing",
""
],
[
"Bodyanskiy",
"Yevgeniy V.",
""
],
[
"Tyshchenko",
"Oleksii K.",
""
],
[
"Boiko",
"Olena O.",
""
]
] |
1610.06912 | Prakhar Ojha | Prakhar Ojha, Partha Talukdar | KGEval: Estimating Accuracy of Automatically Constructed Knowledge
Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic construction of large knowledge graphs (KG) by mining web-scale
text datasets has received considerable attention recently. Estimating accuracy
of such automatically constructed KGs is a challenging problem due to their
size and diversity. This important problem has largely been ignored in prior
research; we fill this gap and propose KGEval. KGEval binds facts of a KG using
coupling constraints and crowdsources the facts that infer correctness of large
parts of the KG. We demonstrate that the objective optimized by KGEval is
submodular and NP-hard, allowing guarantees for our approximation algorithm.
Through extensive experiments on real-world datasets, we demonstrate that
KGEval is able to estimate KG accuracy more accurately compared to other
competitive baselines, while requiring significantly fewer human
evaluations.
| [
{
"version": "v1",
"created": "Fri, 21 Oct 2016 19:49:19 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Dec 2016 06:45:34 GMT"
}
] | 1,480,636,800,000 | [
[
"Ojha",
"Prakhar",
""
],
[
"Talukdar",
"Partha",
""
]
] |
1610.07045 | Yixuan (Julie) Zhu | Julie Yixuan Zhu, Chao Zhang, Huichu Zhang, Shi Zhi, Victor O.K. Li,
Jiawei Han, Yu Zheng | pg-Causality: Identifying Spatiotemporal Causal Pathways for Air
Pollutants with Urban Big Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many countries are suffering from severe air pollution. Understanding how
different air pollutants accumulate and propagate is critical to making
relevant public policies. In this paper, we use urban big data (air quality
data and meteorological data) to identify the \emph{spatiotemporal (ST) causal
pathways} for air pollutants. This problem is challenging because: (1) there
are numerous noisy and low-pollution periods in the raw air quality data, which
may lead to unreliable causality analysis, (2) for large-scale data in the ST
space, the computational complexity of constructing a causal structure is very
high, and (3) the \emph{ST causal pathways} are complex due to the interactions
of multiple pollutants and the influence of environmental factors. Therefore,
we present \emph{p-Causality}, a novel pattern-aided causality analysis
approach that combines the strengths of \emph{pattern mining} and
\emph{Bayesian learning} to efficiently and faithfully identify the \emph{ST
causal pathways}. First, \emph{Pattern mining} helps suppress the noise by
capturing frequent evolving patterns (FEPs) of each monitoring sensor, and
greatly reduces the complexity by selecting the pattern-matched sensors as
"causers". Then, \emph{Bayesian learning} carefully encodes the local and ST
causal relations with a Gaussian Bayesian network (GBN)-based graphical model,
which also integrates environmental influences to minimize biases in the final
results. We evaluate our approach with three real-world data sets containing
982 air quality sensors, in three regions of China from 01-Jun-2013 to
19-Dec-2015. Results show that our approach outperforms the traditional causal
structure learning methods in time efficiency, inference accuracy and
interpretability.
| [
{
"version": "v1",
"created": "Sat, 22 Oct 2016 13:17:28 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Nov 2017 08:30:29 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Apr 2018 07:39:53 GMT"
}
] | 1,524,096,000,000 | [
[
"Zhu",
"Julie Yixuan",
""
],
[
"Zhang",
"Chao",
""
],
[
"Zhang",
"Huichu",
""
],
[
"Zhi",
"Shi",
""
],
[
"Li",
"Victor O. K.",
""
],
[
"Han",
"Jiawei",
""
],
[
"Zheng",
"Yu",
""
]
] |
1610.07388 | L\'aszl\'o Csat\'o | L\'aszl\'o Csat\'o | Characterization of an inconsistency ranking for pairwise comparison
matrices | 13 pages | Annals of Operations Research, 261(1-2): 155-165, 2018 | 10.1007/s10479-017-2627-8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairwise comparisons between alternatives are a well-known method for
measuring preferences of a decision-maker. Since these often do not exhibit
consistency, a number of inconsistency indices have been introduced in order to
measure the deviation from this ideal case. We axiomatically characterize the
inconsistency ranking induced by the Koczkodaj inconsistency index: six
independent properties are presented such that they determine a unique linear
order on the set of all pairwise comparison matrices.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2016 12:37:36 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Jan 2017 12:48:49 GMT"
},
{
"version": "v3",
"created": "Mon, 12 Jun 2017 13:23:02 GMT"
}
] | 1,560,988,800,000 | [
[
"Csató",
"László",
""
]
] |
1610.07505 | Ahmed Alaa | Ahmed M. Alaa and Mihaela van der Schaar | Balancing Suspense and Surprise: Timely Decision Making with Endogenous
Information Acquisition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a Bayesian model for decision-making under time pressure with
endogenous information acquisition. In our model, the decision maker decides
when to observe (costly) information by sampling an underlying continuous-time
stochastic process (time series) that conveys information about the potential
occurrence or non-occurrence of an adverse event which will terminate the
decision-making process. In her attempt to predict the occurrence of the
adverse event, the decision-maker follows a policy that determines when to
acquire information from the time series (continuation), and when to stop
acquiring information and make a final prediction (stopping). We show that the
optimal policy has a rendezvous structure, i.e. a structure in which whenever a
new information sample is gathered from the time series, the optimal "date" for
acquiring the next sample becomes computable. The optimal interval between two
information samples balances a trade-off between the decision maker's surprise,
i.e. the drift in her posterior belief after observing new information, and
suspense, i.e. the probability that the adverse event occurs in the time
interval between two information samples. Moreover, we characterize the
continuation and stopping regions in the decision-maker's state-space, and show
that they depend not only on the decision-maker's beliefs, but also on the
context, i.e. the current realization of the time series.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2016 17:43:34 GMT"
}
] | 1,477,353,600,000 | [
[
"Alaa",
"Ahmed M.",
""
],
[
"van der Schaar",
"Mihaela",
""
]
] |
1610.07862 | Shoumen Datta | Shoumen Palit Austin Datta | Intelligence in Artificial Intelligence | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The elusive quest for intelligence in artificial intelligence prompts us to
consider that instituting human-level intelligence in systems may be (still) in
the realm of utopia. In about a quarter century, we have witnessed the winter
of AI (1990) being transformed and transported to the zenith of tabloid fodder
about AI (2015). The discussion at hand is about the elements that constitute
the canonical idea of intelligence. The delivery of intelligence as a
pay-per-use-service, popping out of an app or from a shrink-wrapped software
defined point solution, is in contrast to the bio-inspired view of intelligence
as an outcome, perhaps formed from a tapestry of events, cross-pollinated by
instances, each with its own microcosm of experiences and learning, which may
not be discrete all-or-none functions but continuous, over space and time. The
enterprise world may not require, aspire or desire such an engaged solution to
improve its services for enabling digital transformation through the deployment
of digital twins, for example. One might ask whether the "work-flow on
steroids" version of decision support may suffice for intelligence? Are we
harking back to the era of rule based expert systems? The image conjured by the
publicity machines offers deep solutions with human-level AI and preposterous
claims about capturing the "brain in a box" by 2020. Even emulating insects may
be difficult in terms of real progress. Perhaps we can try to focus on worms
(Caenorhabditis elegans) which may be better suited for what business needs to
quench its thirst for so-called intelligence in AI.
| [
{
"version": "v1",
"created": "Mon, 24 Oct 2016 02:15:46 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Oct 2016 02:32:30 GMT"
}
] | 1,477,526,400,000 | [
[
"Datta",
"Shoumen Palit Austin",
""
]
] |
1610.07989 | Raji Ghawi | Raji Ghawi | Process Discovery using Inductive Miner and Decomposition | A Submission to the Process Discovery Contest @ BPM2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report presents a submission to the Process Discovery Contest. The
contest is dedicated to the assessment of tools and techniques that discover
business process models from event logs. The objective is to compare the
efficiency of techniques to discover process models that provide a proper
balance between "overfitting" and "underfitting". In the context of the Process
Discovery Contest, process discovery is turned into a classification task with
a training set and a test set; where a process model needs to decide whether
traces are fitting or not. In this report, we first show how we use two
discovery techniques, namely: Inductive Miner and Decomposition, to discover
process models from the training set using ProM tool. Second, we show how we
use replay results to 1) check the rediscoverability of models, and to 2)
classify unseen traces (in test logs) as fitting or not. Then, we discuss the
classification results of validation logs, the complexity of discovered models,
and their impact on the selection of models for submission. The report ends
with the pictures of the submitted process models.
| [
{
"version": "v1",
"created": "Tue, 25 Oct 2016 17:58:54 GMT"
}
] | 1,477,526,400,000 | [
[
"Ghawi",
"Raji",
""
]
] |
1610.08222 | Anuradha Ariyaratne | M. K. A. Ariyaratne, T. G. I. Fernando and S. Weerakoon | A self-tuning Firefly algorithm to tune the parameters of Ant Colony
System (ACSFA) | 18 pages, 21 references, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ant colony system (ACS) is a promising approach which has been widely used in
problems such as Travelling Salesman Problems (TSP), Job shop scheduling
problems (JSP) and Quadratic Assignment problems (QAP). In its original
implementation, parameters of the algorithm were selected by trial and error
approach. Over the last few years, novel approaches have been proposed on
adapting the parameters of ACS in improving its performance. The aim of this
paper is to use a framework introduced for self-tuning optimization algorithms
combined with the firefly algorithm (FA) to tune the parameters of the ACS
solving symmetric TSP problems. The FA optimizes the problem specific
parameters of ACS while the parameters of the FA are tuned by the selected
framework itself. With this approach, the user neither has to work with the
parameters of ACS nor the parameters of FA. Using common symmetric TSP problems
we demonstrate that the framework fits well for the ACS. A detailed statistical
analysis further verifies the goodness of the new ACS over the existing ACS and
also of the other techniques used to tune the parameters of ACS.
| [
{
"version": "v1",
"created": "Wed, 26 Oct 2016 08:01:27 GMT"
}
] | 1,477,526,400,000 | [
[
"Ariyaratne",
"M. K. A.",
""
],
[
"Fernando",
"T. G. I.",
""
],
[
"Weerakoon",
"S.",
""
]
] |
1610.08602 | Iuliia Kotseruba | Iuliia Kotseruba, John K. Tsotsos | A Review of 40 Years of Cognitive Architecture Research: Core Cognitive
Abilities and Practical Applications | 74 pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a broad overview of the last 40 years of research on
cognitive architectures. Although the number of existing architectures is
nearing several hundred, most of the existing surveys do not reflect this
growth and focus on a handful of well-established architectures. Thus, in this
survey we wanted to shift the focus towards a more inclusive and high-level
overview of the research on cognitive architectures. Our final set of 84
architectures includes 49 that are still actively developed, and borrow from a
diverse set of disciplines, spanning areas from psychoanalysis to neuroscience.
To keep the length of this paper within reasonable limits we discuss only the
core cognitive abilities, such as perception, attention mechanisms, action
selection, memory, learning and reasoning. In order to assess the breadth of
practical applications of cognitive architectures we gathered information on
over 900 practical projects implemented using the cognitive architectures in
our list. We use various visualization techniques to highlight overall trends
in the development of the field. In addition to summarizing the current
state-of-the-art in the cognitive architecture research, this survey describes
a variety of methods and ideas that have been tried and their relative success
in modeling human cognitive abilities, as well as which aspects of cognitive
behavior need more research with respect to their mechanistic counterparts and
thus can further inform how cognitive science might progress.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2016 03:48:33 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Sep 2017 02:58:54 GMT"
},
{
"version": "v3",
"created": "Sat, 13 Jan 2018 21:00:14 GMT"
}
] | 1,516,060,800,000 | [
[
"Kotseruba",
"Iuliia",
""
],
[
"Tsotsos",
"John K.",
""
]
] |
1610.08640 | Marc Schoenauer | Marti Luis (TAO, LRI), Fansi-Tchango Arsene (TRT), Navarro Laurent
(TRT), Marc Schoenauer (TAO, LRI) | Anomaly Detection with the Voronoi Diagram Evolutionary Algorithm | null | Parallel Problem Solving from Nature -- PPSN XIV, Sep 2016,
Edinburgh, United Kingdom. Springer Verlag, 9921, pp.697-706, 2016, LNCS | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the Voronoi diagram-based evolutionary algorithm
(VorEAl). VorEAl partitions input space in abnormal/normal subsets using
Voronoi diagrams. Diagrams are evolved using a multi-objective bio-inspired
approach in order to conjointly optimize classification metrics while also
being able to represent areas of the data space that are not present in the
training dataset. As part of the paper VorEAl is experimentally validated and
contrasted with similar approaches.
| [
{
"version": "v1",
"created": "Thu, 27 Oct 2016 07:05:54 GMT"
}
] | 1,477,612,800,000 | [
[
"Luis",
"Marti",
"",
"TAO, LRI"
],
[
"Arsene",
"Fansi-Tchango",
"",
"TRT"
],
[
"Laurent",
"Navarro",
"",
"TRT"
],
[
"Schoenauer",
"Marc",
"",
"TAO, LRI"
]
] |