id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1506.01864 | Konstantin Yakovlev S | Konstantin Yakovlev, Egor Baskin, Ivan Hramoin | Grid-based angle-constrained path planning | 13 pages (12 pages: main text, 1 page: references), 7 figures, 20 references, submitted 2015-June-22 to "The 38th German Conference on Artificial Intelligence" (KI-2015) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Square grids are commonly used in robotics and game development as spatial models, and heuristic search algorithms well known in the AI community (such as A*, JPS, Theta*, etc.) are widely used for path planning on grids. Much research concentrates on finding the geometrically shortest paths, while in many applications smooth paths are preferable to shortest paths that contain sharp turns. In this paper we study the problem of generating smooth paths and concentrate on angle-constrained path planning. We formally state the angle-constrained path planning problem and present a new algorithm tailored to solve it: LIAN. We examine LIAN both theoretically and empirically. We show that it is sound and complete (under some restrictions). We also show that LIAN outperforms its analogues when solving numerous path planning tasks within urban outdoor navigation scenarios.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 11:09:23 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Aug 2015 15:59:28 GMT"
}
] | 1,440,547,200,000 | [
[
"Yakovlev",
"Konstantin",
""
],
[
"Baskin",
"Egor",
""
],
[
"Hramoin",
"Ivan",
""
]
] |
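To make the angle constraint in the abstract above concrete, here is a minimal sketch of the per-step check an angle-constrained planner performs; the threshold parameter is hypothetical and LIAN's actual expansion over grid line-of-sight sections is more involved:

```python
import math

def turn_angle_ok(p0, p1, p2, max_angle_deg=25.0):
    """Check the constraint central to angle-constrained planning: the turn
    between segments p0->p1 and p1->p2 must not exceed max_angle_deg.
    (Illustrative sketch only; the parameter value is hypothetical.)"""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    na, nb = math.hypot(ax, ay), math.hypot(bx, by)
    if na == 0 or nb == 0:
        return True  # degenerate segment: no turn to constrain
    cos_theta = max(-1.0, min(1.0, (ax * bx + ay * by) / (na * nb)))
    return math.degrees(math.acos(cos_theta)) <= max_angle_deg

print(turn_angle_ok((0, 0), (5, 0), (10, 1)))  # gentle turn -> True
print(turn_angle_ok((0, 0), (5, 0), (5, 5)))   # 90-degree turn -> False
```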
1506.02060 | Vasile Patrascu | Vasile Patrascu | Similarity, Cardinality and Entropy for Bipolar Fuzzy Set in the
Framework of Penta-valued Representation | 6 pages. Submitted to journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents new similarity, cardinality and entropy measures for bipolar fuzzy sets and for their particular forms, such as intuitionistic, paraconsistent and fuzzy sets. All of these are constructed in the framework of multi-valued representations and are based on a penta-valued logic that uses the following logical values: true, false, unknown, contradictory and ambiguous. A new distance for bounded real intervals is also defined.
| [
{
"version": "v1",
"created": "Thu, 26 Feb 2015 08:56:02 GMT"
}
] | 1,433,808,000,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1506.02061 | Vasile Patrascu | Vasile Patrascu | Entropy and Syntropy in the Context of Five-Valued Logics | 9 pages. Submitted to journal | null | null | null | cs.AI | http://creativecommons.org/licenses/by/3.0/ | This paper presents a five-valued representation of bifuzzy sets. This
representation is related to a five-valued logic that uses the following
values: true, false, inconsistent, incomplete and ambiguous. In the framework
of five-valued representation, formulae for similarity, entropy and syntropy of
bifuzzy sets are constructed.
| [
{
"version": "v1",
"created": "Thu, 26 Feb 2015 09:36:49 GMT"
}
] | 1,433,808,000,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1506.02082 | Philip Baback Alipour | Philip B. Alipour, Matteus Magnusson, Martin W. Olsson, Nooshin H.
Ghasemi, Lawrence Henesey | A Real-time Cargo Damage Management System via a Sorting Array
Triangulation Technique | This article is a report on a developed IDSS system/prototype intended for publication in journal/conference proceedings and/or as a full paper under Computer Science and Software Engineering categories. 28 pages; 10 figures including graphs; 5 tables; presentation file is available at http://web.uvic.ca/~phibal12/Presentations/IDSS_proj.pptx Ask authors for full code and/or other files | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This report covers an intelligent decision support system (IDSS), which provides an efficient and effective way to rapidly inspect containerized cargos for defects. A defect is either cargo exposure to radiation or physical damage such as holes, punctured surfaces, iron surface oxidation, etc. The system uses a sorting array triangulation (SAT) technique and surface damage detection (SDD) to conduct the inspection. This new technique saves time and money in finding damaged goods during transportation: instead of running $n$ inspections on $n$ containers, only 3 inspections per triangulation, or a ratio of $3:n$, are required, assuming $n > 3$ containers. The damaged stack in the array is virtually detected contiguous to an actually damaged cargo by calculating the nearby distances of such cargos, delivering reliable estimates for the whole local stack population. The estimated values for damaged, somewhat damaged and undamaged cargo stacks are listed and profiled after being sorted by the program, then submitted to the manager for a final decision. The report describes the problem domain and the implementation of the simulator prototype, showing how the system operates via software and hardware, with or without human agents, conducting real-time inspections and management.
| [
{
"version": "v1",
"created": "Fri, 5 Jun 2015 22:56:18 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Jun 2015 20:49:46 GMT"
}
] | 1,434,412,800,000 | [
[
"Alipour",
"Philip B.",
""
],
[
"Magnusson",
"Matteus",
""
],
[
"Olsson",
"Martin W.",
""
],
[
"Ghasemi",
"Nooshin H.",
""
],
[
"Henesey",
"Lawrence",
""
]
] |
1506.02561 | Lakhdar Sais | Said Jabbour and Lakhdar Sais and Yakoub Salhi | On SAT Models Enumeration in Itemset Mining | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Frequent itemset mining is an essential part of data analysis and data mining. Recent works propose interesting SAT-based encodings for the problem of discovering frequent itemsets. Our aim in this work is to define strategies for adapting SAT solvers to such encodings in order to improve model enumeration. In this context, we study in depth the effects of restarts, branching heuristics and clause learning. We then conduct an experimental evaluation on SAT-based itemset mining instances to show how SAT solvers can be adapted to obtain an efficient SAT model enumerator.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 15:50:57 GMT"
}
] | 1,433,808,000,000 | [
[
"Jabbour",
"Said",
""
],
[
"Sais",
"Lakhdar",
""
],
[
"Salhi",
"Yakoub",
""
]
] |
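As background for the enumeration task studied above, the following sketch shows the basic all-models loop. It uses brute force in place of a CDCL solver; the paper's strategies (restarts, branching heuristics, clause learning) apply inside a real solver driving this same loop:

```python
from itertools import product

def enumerate_models(clauses, n_vars):
    """Enumerate all models of a CNF (variables 1..n_vars, negative
    literals for negation). A CDCL-based enumerator would instead solve,
    add a blocking clause negating the found model, and re-solve."""
    def satisfies(assign, clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)
    models = []
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: b for i, b in enumerate(bits)}
        if all(satisfies(assign, c) for c in clauses):
            models.append(assign)
    return models

# (x1 or x2) and (not x1 or x2): exactly the assignments with x2 = True.
print(len(enumerate_models([[1, 2], [-1, 2]], 2)))  # 2
```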
1506.02639 | Paul Beame | Paul Beame and Vincent Liew | New Limits for Knowledge Compilation and Applications to Exact Model
Counting | Full version of paper appearing in UAI 2015, updated to include new references to related work | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show new limits on the efficiency of using current techniques to perform exact probabilistic inference for large classes of natural problems. In
particular we show new lower bounds on knowledge compilation to SDD and DNNF
forms. We give strong lower bounds on the complexity of SDD representations by
relating SDD size to best-partition communication complexity. We use this
relationship to prove exponential lower bounds on the SDD size for representing
a large class of problems that occur naturally as queries over probabilistic
databases. A consequence is that for representing unions of conjunctive
queries, SDDs are not qualitatively more concise than OBDDs. We also derive
simple examples for which SDDs must be exponentially less concise than FBDDs.
Finally, we derive exponential lower bounds on the sizes of DNNF
representations using a new quasipolynomial simulation of DNNFs by
nondeterministic FBDDs.
| [
{
"version": "v1",
"created": "Mon, 8 Jun 2015 19:52:43 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Aug 2015 19:13:38 GMT"
}
] | 1,440,028,800,000 | [
[
"Beame",
"Paul",
""
],
[
"Liew",
"Vincent",
""
]
] |
1506.02930 | Frantisek Duris | Frantisek Duris | Arguments for the Effectiveness of Human Problem Solving | null | null | 10.1016/j.bica.2018.04.007 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The question of how humans solve problems has been addressed extensively. However, the direct study of the effectiveness of this process seems to be overlooked. In this paper, we address the issue of the effectiveness of human problem solving: we analyze where this effectiveness comes from and what cognitive mechanisms or heuristics are involved. Our results are based on the optimal probabilistic problem solving strategy that appeared in Solomonoff's paper on a general problem solving system. We provide arguments that a certain set of cognitive mechanisms or heuristics drives human problem solving in a manner similar to the optimal Solomonoff strategy. The results presented in this paper can serve both cognitive psychology, in better understanding human problem solving processes, and artificial intelligence, in designing more human-like agents.
| [
{
"version": "v1",
"created": "Tue, 9 Jun 2015 14:28:12 GMT"
},
{
"version": "v2",
"created": "Tue, 19 Sep 2017 13:31:27 GMT"
}
] | 1,524,700,800,000 | [
[
"Duris",
"Frantisek",
""
]
] |
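The optimal probabilistic strategy the abstract refers to (Levin-style search from Solomonoff's work on general problem solving) can be sketched as follows; the names and the doubling schedule are illustrative assumptions, not the paper's notation:

```python
def solomonoff_schedule(candidates, budget=1.0):
    """Run each candidate method for a share of the time budget
    proportional to its estimated success probability, doubling the
    budget until some method succeeds. 'candidates' maps a name to
    (probability, solve_fn); all names here are hypothetical."""
    while True:
        for name, (p, solve_fn) in candidates.items():
            if solve_fn(time_limit=p * budget):
                return name, budget
        budget *= 2.0  # nothing solved yet: retry with more time

# Toy methods that "succeed" once given enough time.
methods = {
    "likely_but_slow": (0.7, lambda time_limit: time_limit >= 2.0),
    "unlikely_but_fast": (0.3, lambda time_limit: time_limit >= 0.2),
}
print(solomonoff_schedule(methods))  # ('unlikely_but_fast', 1.0)
```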
1506.03140 | Keenon Werling | Keenon Werling, Arun Chaganty, Percy Liang, Chris Manning | On-the-Job Learning with Bayesian Decision Theory | As appearing in NIPS 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our goal is to deploy a high-accuracy system starting with zero training
examples. We consider an "on-the-job" setting, where as inputs arrive, we use
real-time crowdsourcing to resolve uncertainty where needed and output our
prediction when confident. As the model improves over time, the reliance on
crowdsourcing queries decreases. We cast our setting as a stochastic game based
on Bayesian decision theory, which allows us to balance latency, cost, and
accuracy objectives in a principled way. Computing the optimal policy is
intractable, so we develop an approximation based on Monte Carlo Tree Search.
We tested our approach on three datasets---named-entity recognition, sentiment
classification, and image classification. On the NER task we obtained more than
an order of magnitude reduction in cost compared to full human annotation,
while boosting performance relative to the expert provided labels. We also
achieve a 8% F1 improvement over having a single human label the whole set, and
a 28% F1 improvement over online learning.
| [
{
"version": "v1",
"created": "Wed, 10 Jun 2015 00:40:34 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Dec 2015 21:44:07 GMT"
}
] | 1,449,619,200,000 | [
[
"Werling",
"Keenon",
""
],
[
"Chaganty",
"Arun",
""
],
[
"Liang",
"Percy",
""
],
[
"Manning",
"Chris",
""
]
] |
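The latency/cost/accuracy trade-off described above can be illustrated with a one-step (myopic) decision rule. The paper's actual policy is computed with Monte Carlo Tree Search over a stochastic game, so this is only a sketch with hypothetical penalty parameters:

```python
def act(posterior, query_cost, latency_penalty, error_penalty):
    """Emit a prediction when the expected cost of an error falls below
    the cost of one more crowd query; otherwise keep querying.
    'posterior' maps labels to probabilities."""
    best_label, p_best = max(posterior.items(), key=lambda kv: kv[1])
    expected_error_cost = (1.0 - p_best) * error_penalty
    if expected_error_cost <= query_cost + latency_penalty:
        return ("predict", best_label)
    return ("query_crowd", None)

print(act({"POS": 0.95, "NEG": 0.05}, 0.3, 0.1, 2.0))  # ('predict', 'POS')
print(act({"POS": 0.55, "NEG": 0.45}, 0.3, 0.1, 2.0))  # ('query_crowd', None)
```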
1506.03624 | Daniel J Mankowitz | Daniel J. Mankowitz, Timothy A. Mann, Shie Mannor | Bootstrapping Skills | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The monolithic approach to policy representation in Markov Decision Processes
(MDPs) looks for a single policy that can be represented as a function from
states to actions. For the monolithic approach to succeed (and this is not
always possible), a complex feature representation is often necessary since the
policy is a complex object that has to prescribe what actions to take all over
the state space. This is especially true in large domains with complicated
dynamics. It is also computationally inefficient to both learn and plan in MDPs
using a complex monolithic approach. We present a different approach where we
restrict the policy space to policies that can be represented as combinations
of simpler, parameterized skills---a type of temporally extended action, with a
simple policy representation. We introduce Learning Skills via Bootstrapping (LSB), which can use a broad family of Reinforcement Learning (RL) algorithms as a "black box" to iteratively learn parameterized skills. Initially, the learned
skills are short-sighted but each iteration of the algorithm allows the skills
to bootstrap off one another, improving each skill in the process. We prove
that this bootstrapping process returns a near-optimal policy. Furthermore, our
experiments demonstrate that LSB can solve MDPs that, given the same
representational power, could not be solved by a monolithic approach. Thus,
planning with learned skills results in better policies without requiring
complex policy representations.
| [
{
"version": "v1",
"created": "Thu, 11 Jun 2015 11:06:40 GMT"
}
] | 1,434,067,200,000 | [
[
"Mankowitz",
"Daniel J.",
""
],
[
"Mann",
"Timothy A.",
""
],
[
"Mannor",
"Shie",
""
]
] |
1506.03879 | Ji Xu | Ji Xu, Guoyin Wang | Leading Tree in DPCLUS and Its Impact on Building Hierarchies | 11 pages, 5 figures. A fundamental topic with respect to the research on clustering by fast search and find of density peaks | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reveals the tree structure that arises as an intermediate result of clustering by fast search and find of density peaks (DPCLUS), and explores the power of using this tree to perform hierarchical clustering. The array used to hold the index of the nearest higher-density object for each object can be transformed into a Leading Tree (LT), in which each parent node P leads its child nodes to join the same cluster as P itself, and the child nodes are sorted by their gamma values in descending order to accelerate the disconnection of the root in each subtree. There are two major advantages of the LT. One is dramatically reducing the running time of assigning non-center data points to their cluster ID, because the assignment process is reduced to simply disconnecting the link from each center to its parent. The other is that the tree model for representing clusters is more informative, because we can check which objects are more likely to be selected as centers in finer-grained clustering, or which objects reach their center via fewer jumps. Experimental results and analysis show the effectiveness and efficiency of the assignment process with an LT.
| [
{
"version": "v1",
"created": "Fri, 12 Jun 2015 00:37:54 GMT"
},
{
"version": "v2",
"created": "Mon, 15 Jun 2015 00:38:53 GMT"
}
] | 1,434,412,800,000 | [
[
"Xu",
"Ji",
""
],
[
"Wang",
"Guoyin",
""
]
] |
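A minimal sketch of the Leading Tree construction described above; the gamma-based ordering of children is omitted for brevity and the example array is made up:

```python
def build_leading_tree(leader):
    """Turn the 'nearest higher-density object' array into child lists:
    leader[i] is the parent of object i, and the root (density peak)
    leads itself. Cutting the link from a chosen center to its parent
    then yields that center's cluster without reassigning points."""
    children = {i: [] for i in range(len(leader))}
    for i, parent in enumerate(leader):
        if parent != i:
            children[parent].append(i)
    return children

def collect_cluster(children, center):
    """All objects led, directly or transitively, by 'center'."""
    stack, members = [center], []
    while stack:
        node = stack.pop()
        members.append(node)
        stack.extend(children[node])
    return members

tree = build_leading_tree([0, 0, 1, 1, 0])  # object 0 is the density peak
print(collect_cluster(tree, 1))             # subtree under object 1
```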
1506.04272 | Fuan Pu | Fuan Pu, Jian Luo, Yulai Zhang, and Guiming Luo | Attacker and Defender Counting Approach for Abstract Argumentation | 7 pages, 2 figures;conference CogSci 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Dung's abstract argumentation, arguments are either acceptable or
unacceptable, given a chosen notion of acceptability. This gives a coarse way
to compare arguments. In this paper, we propose a counting approach for a more fine-grained assessment of arguments by counting the number of their respective attackers and defenders based on the argument graph and argument game. An argument is more acceptable if the proponent puts forward more defenders for it and the opponent puts forward fewer attackers against it. We show
that our counting model has two well-behaved properties: normalization and
convergence. Then, we define a counting semantics based on this model, and
investigate some general properties of the semantics.
| [
{
"version": "v1",
"created": "Sat, 13 Jun 2015 14:24:51 GMT"
}
] | 1,437,436,800,000 | [
[
"Pu",
"Fuan",
""
],
[
"Luo",
"Jian",
""
],
[
"Zhang",
"Yulai",
""
],
[
"Luo",
"Guiming",
""
]
] |
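As a toy illustration of counting attackers and defenders over an argument graph, one can propagate a damped score along attack edges: attackers of an argument lower its score, attackers of its attackers (defenders) raise it back. The damping stands in for the normalization behind the paper's convergence property; this recursion is an illustrative stand-in, not the authors' exact counting model:

```python
def counting_strength(attacks, n, damping=0.5, iters=50):
    """Iteratively score n arguments: each argument starts at 1.0 and is
    debited by the (damped) strength of its attackers, so even-length
    attack paths (defenders) raise a score and odd-length paths lower it.
    'attacks' is a list of (attacker, target) pairs; damping and iters
    are hypothetical choices."""
    attackers_of = {i: [] for i in range(n)}
    for a, t in attacks:
        attackers_of[t].append(a)
    strength = [1.0] * n
    for _ in range(iters):
        strength = [1.0 - damping * sum(strength[a] for a in attackers_of[i])
                    for i in range(n)]
    return strength

# 1 attacks 0, 2 attacks 1: argument 2 defends 0, so 0 outranks 1.
print(counting_strength([(1, 0), (2, 1)], 3))  # [0.75, 0.5, 1.0]
```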
1506.04366 | Arthur Franz | Arthur Franz | Artificial general intelligence through recursive data compression and
grounded reasoning: a position paper | 27 pages, 3 figures, position paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a tentative outline for the construction of an
artificial, generally intelligent system (AGI). It is argued that building a
general data compression algorithm solving all problems up to a complexity
threshold should be the main thrust of research. A measure for partial progress
in AGI is suggested. Although the details are far from being clear, some
general properties for a general compression algorithm are fleshed out. Its
inductive bias should be flexible and adapt to the input data while constantly
searching for a simple, orthogonal and complete set of hypotheses explaining
the data. It should recursively reduce the size of its representations thereby
compressing the data increasingly at every iteration.
Based on that fundamental ability, a grounded reasoning system is
proposed. It is argued how grounding and flexible feature bases made of
hypotheses allow for resourceful thinking. While the simulation of
representation contents on the mental stage accounts for much of the power of
propositional logic, compression leads to simple sets of hypotheses that allow
the detection and verification of universally quantified statements.
Together, it is highlighted how general compression and grounded
reasoning could account for the birth and growth of first concepts about the
world and the commonsense reasoning about them.
| [
{
"version": "v1",
"created": "Sun, 14 Jun 2015 09:29:11 GMT"
}
] | 1,434,412,800,000 | [
[
"Franz",
"Arthur",
""
]
] |
1506.04956 | Ernest Davis | Ernest Davis and Gary Marcus | The Scope and Limits of Simulation in Cognitive Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been proposed that human physical reasoning consists largely of
running "physics engines in the head" in which the future trajectory of the
physical system under consideration is computed precisely using accurate
scientific theories. In such models, uncertainty and incomplete knowledge are dealt with by sampling probabilistically over the space of possible
trajectories ("Monte Carlo simulation"). We argue that such simulation-based
models are too weak, in that there are many important aspects of human physical
reasoning that cannot be carried out this way, or can only be carried out very
inefficiently; and too strong, in that humans make large systematic errors that
the models cannot account for. We conclude that simulation-based reasoning
makes up at most a small part of a larger system that encompasses a wide range
of additional cognitive processes.
| [
{
"version": "v1",
"created": "Tue, 16 Jun 2015 13:14:26 GMT"
}
] | 1,434,499,200,000 | [
[
"Davis",
"Ernest",
""
],
[
"Marcus",
"Gary",
""
]
] |
1506.05969 | Fary Diallo | Papa Fary Diallo (WIMMICS), Olivier Corby (WIMMICS), Isabelle Mirbel
(WIMMICS), Moussa Lo, Seydina M. Ndiaye | HuTO: a Human Time Ontology for Semantic Web Applications | in French. Ingénierie des Connaissances 2015, Jul 2015, Rennes, France. Association Française pour l'Intelligence Artificielle (AFIA) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal phenomena have many facets that are studied by different communities. On the Semantic Web, large amounts of heterogeneous data are handled and produced. These data often carry informal, semi-formal or formal temporal information which must be interpreted by software agents. In this paper we present the Human Time Ontology (HuTO), an RDFS ontology to annotate and represent temporal data. A major contribution of HuTO is the modeling of non-convex intervals, which makes it possible to write queries over this kind of interval. HuTO also incorporates normalization and reasoning rules to make certain information explicit. In addition, HuTO proposes an approach that associates a temporal dimension with the knowledge base content, which facilitates information retrieval whether or not the temporal aspect is taken into account.
| [
{
"version": "v1",
"created": "Fri, 19 Jun 2015 12:08:39 GMT"
}
] | 1,434,931,200,000 | [
[
"Diallo",
"Papa Fary",
"",
"WIMMICS"
],
[
"Corby",
"Olivier",
"",
"WIMMICS"
],
[
"Mirbel",
"Isabelle",
"",
"WIMMICS"
],
[
"Lo",
"Moussa",
""
],
[
"Ndiaye",
"Seydina M.",
""
]
] |
1506.07359 | Jan Leike | Tom Everitt and Jan Leike and Marcus Hutter | Sequential Extensions of Causal and Evidential Decision Theory | ADT 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Moving beyond the dualistic view in AI where agent and environment are
separated incurs new challenges for decision making, as calculation of expected
utility is no longer straightforward. The non-dualistic decision theory
literature is split between causal decision theory and evidential decision
theory. We extend these decision algorithms to the sequential setting where the
agent alternates between taking actions and observing their consequences. We
find that evidential decision theory has two natural extensions while causal
decision theory only has one.
| [
{
"version": "v1",
"created": "Wed, 24 Jun 2015 13:16:16 GMT"
}
] | 1,435,190,400,000 | [
[
"Everitt",
"Tom",
""
],
[
"Leike",
"Jan",
""
],
[
"Hutter",
"Marcus",
""
]
] |
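For reference, the one-shot versions of the two decision theories being extended can be written in standard (textbook) notation, which is not necessarily the paper's own:

$$V_{\mathrm{EDT}}(a) = \sum_{o} P(o \mid a)\, U(o), \qquad V_{\mathrm{CDT}}(a) = \sum_{o} P(o \mid \mathrm{do}(a))\, U(o)$$

Evidential decision theory conditions on the action as evidence about the world, while causal decision theory evaluates the intervention $\mathrm{do}(a)$; the sequential setting studied here interleaves such action evaluations with observations of their consequences.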
1506.08813 | Anthony Young | Anthony P. Young, Sanjay Modgil, Odinaldo Rodrigues | Argumentation Semantics for Prioritised Default Logic | 46 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We endow prioritised default logic (PDL) with argumentation semantics using
the ASPIC+ framework for structured argumentation, and prove that the
conclusions of the justified arguments are exactly the prioritised default
extensions. Argumentation semantics for PDL will allow for the application of
argument game proof theories to the process of inference in PDL, making the
reasons for accepting a conclusion transparent and the inference process more
intuitive. This also opens up the possibility for argumentation-based
distributed reasoning and communication amongst agents with PDL representations
of mental attitudes.
| [
{
"version": "v1",
"created": "Fri, 26 Jun 2015 21:53:54 GMT"
},
{
"version": "v2",
"created": "Wed, 1 Jul 2015 11:01:17 GMT"
}
] | 1,435,795,200,000 | [
[
"Young",
"Anthony P.",
""
],
[
"Modgil",
"Sanjay",
""
],
[
"Rodrigues",
"Odinaldo",
""
]
] |
1506.08919 | Nicolas Schwind | Nicolas Schwind, Katsumi Inoue | Characterization of Logic Program Revision as an Extension of
Propositional Revision | 42 pages, 5 figures, to appear in Theory and Practice of Logic
Programming (accepted in June 2015) | Theory and Practice of Logic Programming 16 (2016) 111-138 | 10.1017/S1471068415000101 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of belief revision of logic programs, i.e., how to incorporate a new logic program Q into a logic program P. Based on the structure
of SE interpretations, Delgrande et al. adapted the well-known AGM framework to
logic program (LP) revision. They identified the rational behavior of LP
revision and introduced some specific operators. In this paper, a constructive
characterization of all rational LP revision operators is given in terms of
orderings over propositional interpretations with some further conditions
specific to SE interpretations. It provides an intuitive, complete procedure for the construction of all rational LP revision operators and makes the comprehension of their semantic and computational properties easier. We give particular consideration to logic programs of a very general form, i.e., the
generalized logic programs (GLPs). We show that every rational GLP revision
operator is derived from a propositional revision operator satisfying the
original AGM postulates. Interestingly, the further conditions specific to GLP
revision are independent of the propositional revision operator on which a
GLP revision operator is based. Taking advantage of our characterization
result, we embed the GLP revision operators into structures of Boolean
lattices, which allows us to bring to light some potential weaknesses in the
adapted AGM postulates. To illustrate our claim, we introduce and characterize
axiomatically two specific classes of (rational) GLP revision operators which
arguably have a drastic behavior. We additionally consider two more restricted
forms of logic programs, i.e., the disjunctive logic programs (DLPs) and the
normal logic programs (NLPs) and adapt our characterization result to DLP and
NLP revision operators.
| [
{
"version": "v1",
"created": "Tue, 30 Jun 2015 02:09:02 GMT"
}
] | 1,582,070,400,000 | [
[
"Schwind",
"Nicolas",
""
],
[
"Inoue",
"Katsumi",
""
]
] |
1507.00142 | Cunjing Ge | Cunjing Ge, Feifei Ma and Jian Zhang | A Tool for Computing and Estimating the Volume of the Solution Space of
SMT(LA) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are already quite a few tools for solving the Satisfiability Modulo
Theories (SMT) problems. In this paper, we present \texttt{VolCE}, a tool for
counting the solutions of SMT constraints, or in other words, for computing the
volume of the solution space. Its input is essentially a set of Boolean
combinations of linear constraints, where the numeric variables are either all
integers or all reals, and each variable is bounded. The tool extends SMT
solving with integer solution counting and volume computation/estimation for
convex polytopes. Effective heuristics are adopted, which enable the tool to
deal with high-dimensional problem instances efficiently and accurately.
| [
{
"version": "v1",
"created": "Wed, 1 Jul 2015 08:06:33 GMT"
}
] | 1,435,795,200,000 | [
[
"Ge",
"Cunjing",
""
],
[
"Ma",
"Feifei",
""
],
[
"Zhang",
"Jian",
""
]
] |
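The simplest version of the volume estimation task addressed above is hit-or-miss Monte Carlo over a bounding box. This sketch shows only that baseline idea, not VolCE's actual algorithms or interface:

```python
import random

def estimate_volume(constraints, bounds, samples=100_000, seed=0):
    """Estimate the volume of {x : a.x <= b for all (a, b)} inside a box:
    sample uniformly, count the fraction of points satisfying every
    linear constraint, and scale by the box volume."""
    rng = random.Random(seed)
    box_vol = 1.0
    for lo, hi in bounds:
        box_vol *= hi - lo
    hits = 0
    for _ in range(samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        if all(sum(ai * xi for ai, xi in zip(a, x)) <= b
               for a, b in constraints):
            hits += 1
    return box_vol * hits / samples

# Unit triangle x >= 0, y >= 0, x + y <= 1: true area is 0.5.
triangle = [([-1, 0], 0), ([0, -1], 0), ([1, 1], 1)]
print(estimate_volume(triangle, [(0, 1), (0, 1)]))  # ~0.5
```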
1507.00862 | Alexander Semenov | Alexander Semenov and Oleg Zaikin | Using Monte Carlo method for searching partitionings of hard variants of
Boolean satisfiability problem | The reduced version of this paper was accepted for publication in
proceedings of the PaCT 2015 conference (LNCS Vol. 9251). arXiv admin note:
substantial text overlap with arXiv:1411.5433 | LNCS 9251 (2015) 222-230 | 10.1007/978-3-319-21909-7_21 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose an approach for constructing partitionings of hard variants of the Boolean satisfiability problem (SAT). Such partitionings can be used for solving the corresponding SAT instances in parallel. For the same SAT instance one can construct different partitionings, each of which is a set of simplified versions of the original SAT instance. The effectiveness of an arbitrary partitioning is determined by the total time needed to solve all SAT instances in it. We suggest an approach, based on the Monte Carlo method, for estimating the processing time of an arbitrary partitioning. With each partitioning we associate a point in a special finite search space. The estimated effectiveness of a particular partitioning is the value of a predictive function at the corresponding point of this space. The problem of searching for an effective partitioning can thus be formulated as optimization of the predictive function. We use metaheuristic algorithms (simulated annealing and tabu search) to move from point to point in the search space. In our computational experiments we found partitionings for SAT instances encoding the inversion of some cryptographic functions. Several of these SAT instances with realistic predicted solving times were successfully solved on a computing cluster and in the volunteer computing project SAT@home. The solving times agree well with the estimations obtained by the proposed method.
| [
{
"version": "v1",
"created": "Fri, 3 Jul 2015 10:18:01 GMT"
}
] | 1,445,558,400,000 | [
[
"Semenov",
"Alexander",
""
],
[
"Zaikin",
"Oleg",
""
]
] |
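The simulated-annealing component of the search over partitionings can be sketched generically. Here 'predicted_time' stands in for the Monte-Carlo-estimated predictive function being minimized, and the toy 1-D cost function merely exercises the loop; all parameter values are hypothetical:

```python
import math
import random

def anneal(initial, neighbor, predicted_time, t0=1.0, cooling=0.99,
           steps=1000, seed=0):
    """Generic simulated annealing: accept improving moves always and
    worsening moves with Boltzmann probability at a decaying temperature."""
    rng = random.Random(seed)
    current = best = initial
    f_cur = f_best = predicted_time(initial)
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        f_cand = predicted_time(cand)
        if f_cand <= f_cur or rng.random() < math.exp((f_cur - f_cand) / t):
            current, f_cur = cand, f_cand
            if f_cur < f_best:
                best, f_best = current, f_cur
        t *= cooling
    return best, f_best

# Toy stand-in: "partitionings" are integers, cost is a bumpy 1-D function.
cost = lambda x: (x - 42) ** 2 + 10 * math.sin(x)
step = lambda x, rng: x + rng.choice([-3, -1, 1, 3])
print(anneal(0, step, cost))  # lands near x = 42
```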
1507.01384 | Christopher A. Tucker | Christopher A. Tucker | The method of artificial systems | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This document is written with the intention to describe in detail a method
and means by which a computer program can reason about the world and in so
doing, increase its analogy to a living system. As the literature is rife with attempts and it is apparent that we, as scientists and engineers, have not found the solution, this document will attempt one by grounding its intellectual arguments
within tenets of human cognition in Western philosophy. The result will be a
characteristic description of a method to describe an artificial system
analogous to that performed for a human. The approach was the substance of my
Master's thesis, explored more deeply during the course of my postdoc research.
It focuses primarily on context awareness and choice set within a boundary of
available epistemology, which serves to describe it. Expanded upon, such a
description strives to discover agreement with Kant's critique of reason to
understand how it could be applied to define the architecture of its design.
The intention has never been to mimic human or biological systems, but rather to understand the profoundly fundamental rules that, when leveraged correctly, result in an artificial consciousness as noumenon while remaining in keeping with the perception
of it as phenomenon.
| [
{
"version": "v1",
"created": "Mon, 6 Jul 2015 10:52:08 GMT"
},
{
"version": "v2",
"created": "Sun, 21 May 2017 13:37:02 GMT"
}
] | 1,495,497,600,000 | [
[
"Tucker",
"Christopher A.",
""
]
] |
1507.01425 | Ryuta Arisaka | Ryuta Arisaka | Latent Belief Theory and Belief Dependencies: A Solution to the Recovery
Problem in the Belief Set Theories | Corrected the following: 1. in Definition 1, earlier versions had
2^Props x 2^Props x N, but clearly it should be 2^{Props x Props x N}. 2. in
Definition 1, one disjunctive case was missing. The 5th item is newly added
to complete. 3. On page 3, in the right column, the 2nd axiom for Compactness
has a typo. It is not P \in X, but should be P \in L(X) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The AGM recovery postulate says: assume a set of propositions X; assume that
it is consistent and that it is closed under logical consequences; remove a
belief P from the set minimally, but make sure that the resultant set is again
some set of propositions X' which is closed under the logical consequences; now
add P again and close the set under the logical consequences; and we should get
a set of propositions that contains all the propositions that were in X. This
postulate has since met objections; many have observed that it could bear
counter-intuitive results. Nevertheless, the attempts that have been made so
far to amend it either recovered the postulate in full, had to relinquish the
assumption of the logical closure altogether, or else had to introduce fresh
controversies of their own. We provide a solution to the recovery paradox in
this work. Our theoretical basis is the recently proposed belief theory with
latent beliefs (simply the latent belief theory for short). Firstly, through
examples, we will illustrate that the vanilla latent belief theory can be made
more expressive. We will identify that a latent belief, when it becomes
visible, may remain visible only while the beliefs that triggered it into the
agent's consciousness are in the agent's belief set. In order that such
situations can be also handled, we will enrich the latent belief theory with
belief dependencies among attributive beliefs, recording the information as to
which belief is supported in its existence by which beliefs. We will show that
the enriched latent belief theory does not possess the recovery property. The
closure by logical consequences is maintained in the theory, however. Hence it
serves as a solution to the open problem in the belief set theories.
| [
{
"version": "v1",
"created": "Mon, 6 Jul 2015 12:48:59 GMT"
},
{
"version": "v2",
"created": "Wed, 8 Jul 2015 16:59:42 GMT"
},
{
"version": "v3",
"created": "Tue, 8 Sep 2015 04:13:51 GMT"
},
{
"version": "v4",
"created": "Wed, 27 Jan 2016 03:03:43 GMT"
}
] | 1,453,939,200,000 | [
[
"Arisaka",
"Ryuta",
""
]
] |
1507.01986 | Nathaniel Soares | Nate Soares and Benja Fallenstein | Toward Idealized Decision Theory | This is an extended version of a paper accepted to AGI-2015 | null | null | 2014-7 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper motivates the study of decision theory as necessary for aligning
smarter-than-human artificial systems with human interests. We discuss the
shortcomings of two standard formulations of decision theory, and demonstrate
that they cannot be used to describe an idealized decision procedure suitable
for approximation by artificial systems. We then explore the notions of policy
selection and logical counterfactuals, two recent insights into decision theory
that point the way toward promising paths for future research.
| [
{
"version": "v1",
"created": "Tue, 7 Jul 2015 23:06:59 GMT"
}
] | 1,436,400,000,000 | [
[
"Soares",
"Nate",
""
],
[
"Fallenstein",
"Benja",
""
]
] |
1507.02456 | Melisachew Wudage Chekol | Melisachew Wudage Chekol and Jakob Huber and Heiner Stuckenschmidt | Towards Log-Linear Logics with Concrete Domains | StarAI2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present $\mathcal{MEL}^{++}$ (M denotes Markov logic networks), an
extension of the log-linear description logics $\mathcal{EL}^{++}$-LL with
concrete domains, nominals, and instances. We use Markov logic networks (MLNs)
in order to find the most probable, classified and coherent $\mathcal{EL}^{++}$
ontology from an $\mathcal{MEL}^{++}$ knowledge base. In particular, we develop
a novel way to deal with concrete domains (also known as datatypes) by
extending MLN's cutting plane inference (CPI) algorithm.
| [
{
"version": "v1",
"created": "Thu, 9 Jul 2015 11:02:38 GMT"
},
{
"version": "v2",
"created": "Wed, 15 Jul 2015 08:29:23 GMT"
}
] | 1,437,004,800,000 | [
[
"Chekol",
"Melisachew Wudage",
""
],
[
"Huber",
"Jakob",
""
],
[
"Stuckenschmidt",
"Heiner",
""
]
] |
1507.02873 | Joris Renkens | Joris Renkens and Angelika Kimmig and Luc De Raedt | Lazy Explanation-Based Approximation for Probabilistic Logic Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a lazy approach to the explanation-based approximation of
probabilistic logic programs. It uses only the most significant part of the
program when searching for explanations. The result is a fast and anytime
approximate inference algorithm which returns hard lower and upper bounds on
the exact probability. We experimentally show that this method outperforms
state-of-the-art approximate inference.
| [
{
"version": "v1",
"created": "Fri, 10 Jul 2015 12:29:47 GMT"
}
] | 1,436,745,600,000 | [
[
"Renkens",
"Joris",
""
],
[
"Kimmig",
"Angelika",
""
],
[
"De Raedt",
"Luc",
""
]
] |
1507.02912 | James Cussens | James Cussens | First-order integer programming for MAP problems | corrected typos | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding the most probable (MAP) model in SRL frameworks such as Markov logic and ProbLog can, in principle, be done by encoding the problem as a
`grounded-out' mixed integer program (MIP). However, useful first-order
structure disappears in this process motivating the development of first-order
MIP approaches. Here we present mfoilp, one such approach. Since the syntax and
semantics of mfoilp is essentially the same as existing approaches we focus
here mainly on implementation and algorithmic issues. We start with the
(conceptually) simple problem of using a logic program to generate a MIP
instance before considering more ambitious exploitation of first-order
representations.
| [
{
"version": "v1",
"created": "Fri, 10 Jul 2015 14:13:31 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jul 2015 05:48:01 GMT"
}
] | 1,436,832,000,000 | [
[
"Cussens",
"James",
""
]
] |
1507.03097 | Shangpu Jiang | Shangpu Jiang, Daniel Lowd, Dejing Dou | Ontology Matching with Knowledge Rules | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology matching is the process of automatically determining the semantic
equivalences between the concepts of two ontologies. Most ontology matching
algorithms are based on two types of strategies: terminology-based strategies,
which align concepts based on their names or descriptions, and structure-based
strategies, which exploit concept hierarchies to find the alignment. In many
domains, there is additional information about the relationships of concepts
represented in various ways, such as Bayesian networks, decision trees, and
association rules. We propose to use the similarities between these
relationships to find more accurate alignments. We accomplish this by defining
soft constraints that prefer alignments where corresponding concepts have the
same local relationships encoded as knowledge rules. We use a probabilistic
framework to integrate this new knowledge-based strategy with standard
terminology-based and structure-based strategies. Furthermore, our method is
particularly effective in identifying correspondences between complex concepts.
Our method achieves substantially better F-score than the previous
state-of-the-art on three ontology matching domains.
| [
{
"version": "v1",
"created": "Sat, 11 Jul 2015 11:19:36 GMT"
}
] | 1,436,832,000,000 | [
[
"Jiang",
"Shangpu",
""
],
[
"Lowd",
"Daniel",
""
],
[
"Dou",
"Dejing",
""
]
] |
1507.03168 | Pablo Robles-Granda | Pablo Robles-Granda and Sebastian Moreno and Jennifer Neville | Using Bayesian Network Representations for Effective Sampling from
Generative Network Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian networks (BNs) are used for inference and sampling by exploiting
conditional independence among random variables. Context specific independence
(CSI) is a property of graphical models where additional independence relations
arise in the context of particular values of random variables (RVs).
Identifying and exploiting CSI properties can simplify inference. Some
generative network models (models that generate social/information network
samples from a network distribution P(G)), with complex interactions among a
set of RVs, can be represented with probabilistic graphical models, in
particular with BNs. In the present work we show one such a case. We discuss
how a mixed Kronecker Product Graph Model can be represented as a BN, and study
its BN properties that can be used for efficient sampling. Specifically, we
show that instead of exhibiting CSI properties, the model has deterministic
context-specific dependence (DCSD). Exploiting this property focuses the
sampling method on a subset of the sampling space that improves efficiency.
| [
{
"version": "v1",
"created": "Sat, 11 Jul 2015 23:10:17 GMT"
}
] | 1,436,832,000,000 | [
[
"Robles-Granda",
"Pablo",
""
],
[
"Moreno",
"Sebastian",
""
],
[
"Neville",
"Jennifer",
""
]
] |
1507.03181 | Shangpu Jiang | Shangpu Jiang, Daniel Lowd, Dejing Dou | A Probabilistic Approach to Knowledge Translation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we focus on a novel knowledge reuse scenario where the
knowledge in the source schema needs to be translated to a semantically
heterogeneous target schema. We refer to this task as "knowledge translation"
(KT). Unlike data translation and transfer learning, KT does not require any
data from the source or target schema. We adopt a probabilistic approach to KT
by representing the knowledge in the source schema, the mapping between the
source and target schemas, and the resulting knowledge in the target schema all
as probability distributions, specifically using Markov random fields and Markov
logic networks. Given the source knowledge and mappings, we use standard
learning and inference algorithms for probabilistic graphical models to find an
explicit probability distribution in the target schema that minimizes the
Kullback-Leibler divergence from the implicit distribution. This gives us a
compact probabilistic model that represents knowledge from the source schema as
well as possible, respecting the uncertainty in both the source knowledge and
the mapping. In experiments on both propositional and relational domains, we
find that the knowledge obtained by KT is comparable to other approaches that
require data, demonstrating that knowledge can be reused without data.
| [
{
"version": "v1",
"created": "Sun, 12 Jul 2015 03:24:21 GMT"
}
] | 1,436,832,000,000 | [
[
"Jiang",
"Shangpu",
""
],
[
"Lowd",
"Daniel",
""
],
[
"Dou",
"Dejing",
""
]
] |
1507.03257 | Michael Gr. Voskoglou Prof. Dr. | Michael Voskoglou | Use of the Triangular Fuzzy Numbers for Student Assessment | 9 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In an earlier work we used the Triangular Fuzzy Numbers (TFNs) as an assessment tool of student skills. This approach led to an approximate linguistic characterization of the students' overall performance, but it did not prove sufficient in all cases for comparing the performance of two different student groups, since two TFNs are not always comparable. In the present paper we complete the above fuzzy assessment approach by presenting a defuzzification method for TFNs based on the Center of Gravity (COG) technique, which enables the required comparison. In addition, we extend our results by also using the Trapezoidal Fuzzy Numbers (TpFNs), which are a generalization of the TFNs, for student assessment, and we present suitable examples illustrating our new results in practice.
| [
{
"version": "v1",
"created": "Sun, 12 Jul 2015 17:57:50 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Oct 2015 22:27:49 GMT"
}
] | 1,444,780,800,000 | [
[
"Voskoglou",
"Michael",
""
]
] |
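The COG defuzzification mentioned above has a simple closed form for a TFN $(a, b, c)$ with left endpoint $a$, mode $b$ and right endpoint $c$: the membership graph is the triangle with vertices $(a, 0)$, $(b, 1)$ and $(c, 0)$, whose centroid has

$$x_{\mathrm{COG}} = \frac{a + b + c}{3},$$

so two TFN-valued assessments can always be compared through these crisp values even when the TFNs themselves are incomparable. (This is the standard triangle-centroid formula; the paper's full method may involve further steps.)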
1507.03638 | Giuseppe Tommaso Costanzo | Giuseppe Tommaso Costanzo, Sandro Iacovella, Frederik Ruelens, T.
Leurs and Bert Claessens | Experimental analysis of data-driven control for a building heating
system | 12 pages, 8 figures, pending for publication in Elsevier SEGAN | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Driven by the opportunity to harvest the flexibility related to building
climate control for demand response applications, this work presents a
data-driven control approach building upon recent advancements in reinforcement
learning. More specifically, model-assisted batch reinforcement learning is applied to the setting of building climate control subject to dynamic pricing. The underlying sequential decision-making problem is cast as a Markov decision process, after which the control algorithm is detailed. In this work,
fitted Q-iteration is used to construct a policy from a batch of experimental
tuples. In those regions of the state space where the experimental sample
density is low, virtual support samples are added using an artificial neural
network. Finally, the resulting policy is shaped using domain knowledge. The
control approach has been evaluated quantitatively using a simulation and
qualitatively in a living lab. From the quantitative analysis it has been found
that the control approach converges in approximately 20 days to obtain a
control policy with a performance within 90% of the mathematical optimum. The
experimental analysis confirms that within 10 to 20 days sensible policies are
obtained that can be used for different outside temperature regimes.
| [
{
"version": "v1",
"created": "Mon, 13 Jul 2015 22:19:41 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 21:43:03 GMT"
}
] | 1,477,872,000,000 | [
[
"Costanzo",
"Giuseppe Tommaso",
""
],
[
"Iacovella",
"Sandro",
""
],
[
"Ruelens",
"Frederik",
""
],
[
"Leurs",
"T.",
""
],
[
"Claessens",
"Bert",
""
]
] |
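A bare-bones version of the fitted Q-iteration step at the core of the controller is sketched below. The regressor choice, features and synthetic batch are assumptions; the paper additionally adds virtual support samples from a neural network and shapes the resulting policy with domain knowledge:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def fitted_q_iteration(transitions, n_actions, gamma=0.95, iters=30):
    """Regress Q(s, a) on Bellman targets computed from a fixed batch of
    (state, action, reward, next_state) tuples."""
    S = np.array([t[0] for t in transitions], dtype=float)
    A = np.array([[t[1]] for t in transitions], dtype=float)
    R = np.array([t[2] for t in transitions], dtype=float)
    S2 = np.array([t[3] for t in transitions], dtype=float)
    X = np.hstack([S, A])
    model = None
    for _ in range(iters):
        if model is None:
            targets = R  # first pass: one-step rewards only
        else:
            q_next = np.column_stack([
                model.predict(np.hstack([S2, np.full((len(S2), 1), a)]))
                for a in range(n_actions)])
            targets = R + gamma * q_next.max(axis=1)
        model = ExtraTreesRegressor(n_estimators=50, random_state=0)
        model.fit(X, targets)
    return model

# Tiny synthetic batch: state = [room temperature], actions {0: off, 1: heat}.
batch = [([19.0], 1, -1.0, [20.5]), ([20.5], 0, 0.0, [20.0]),
         ([20.0], 0, 0.0, [19.5]), ([19.5], 1, -1.0, [21.0])]
q_model = fitted_q_iteration(batch, n_actions=2)
```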
1507.03920 | Mario Alviano | Mario Alviano and Rafael Penaloza | Fuzzy Answer Set Computation via Satisfiability Modulo Theories | null | Theory and Practice of Logic Programming 15 (2015) 588-603 | 10.1017/S1471068415000241 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fuzzy answer set programming (FASP) combines two declarative frameworks,
answer set programming and fuzzy logic, in order to model reasoning by default
over imprecise information. Several connectives are available to combine
different expressions; in particular the Gödel and Łukasiewicz fuzzy connectives are usually considered, due to their properties. Although the Gödel conjunction can be easily eliminated from rule heads, we show through complexity arguments that such a simplification is infeasible in general for all other connectives. The paper
analyzes a translation of FASP programs into satisfiability modulo
theories~(SMT), which in general produces quantified formulas because of the
minimality of the semantics. Structural properties of many FASP programs allow
to eliminate the quantification, or to sensibly reduce the number of quantified
variables. Indeed, integrality constraints can replace recursive rules commonly
used to force Boolean interpretations, and completion subformulas can guarantee
minimality for acyclic programs with atomic heads. Moreover, head cycle free
rules can be replaced by shifted subprograms, whose structure depends on the
eliminated head connective, so that ordered completion may replace the
minimality check if also Łukasiewicz disjunction in rule bodies is acyclic. The paper
also presents and evaluates a prototype system implementing these translations.
To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of
ICLP 2015.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 16:52:05 GMT"
}
] | 1,582,070,400,000 | [
[
"Alviano",
"Mario",
""
],
[
"Penaloza",
"Rafael",
""
]
] |
1507.03922 | Mario Alviano | Mario Alviano and Nicola Leone | Complexity and Compilation of GZ-Aggregates in Answer Set Programming | null | Theory and Practice of Logic Programming 15 (2015) 574-587 | 10.1017/S147106841500023X | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gelfond and Zhang recently proposed a new stable model semantics based on
the Vicious Circle Principle in order to improve the interpretation of logic
programs with aggregates. The paper focuses on this proposal, and analyzes the
complexity of both coherence testing and cautious reasoning under the new
semantics. Some surprising results highlight similarities and differences
versus mainstream stable model semantics for aggregates. Moreover, the paper
reports on the design of compilation techniques for implementing the new
semantics on top of existing ASP solvers, which eventually lead to realize a
prototype system that allows for experimenting with Gelfond-Zhang's aggregates.
To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of
ICLP 2015.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 16:54:36 GMT"
}
] | 1,582,070,400,000 | [
[
"Alviano",
"Mario",
""
],
[
"Leone",
"Nicola",
""
]
] |
1507.03923 | Mario Alviano | Mario Alviano and Wolfgang Faber and Martin Gebser | Rewriting recursive aggregates in answer set programming: back to
monotonicity | null | Theory and Practice of Logic Programming 15 (2015) 559-573 | 10.1017/S1471068415000228 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aggregation functions are widely used in answer set programming for
representing and reasoning on knowledge involving sets of objects collectively.
Current implementations simplify the structure of programs in order to optimize
the overall performance. In particular, aggregates are rewritten into simpler
forms known as monotone aggregates. Since the evaluation of normal programs
with monotone aggregates is in general on a lower complexity level than the
evaluation of normal programs with arbitrary aggregates, any faithful
translation function must introduce disjunction in rule heads in some cases.
However, no function of this kind is known. The paper closes this gap by
introducing a polynomial, faithful, and modular translation for rewriting
common aggregation functions into the simpler form accepted by current solvers.
A prototype system allows for experimenting with arbitrary recursive
aggregates, which are also supported in the recent version 4.5 of the grounder
\textsc{gringo}, using the methods presented in this paper.
To appear in Theory and Practice of Logic Programming (TPLP), Proceedings of
ICLP 2015.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 16:57:33 GMT"
}
] | 1,582,070,400,000 | [
[
"Alviano",
"Mario",
""
],
[
"Faber",
"Wolfgang",
""
],
[
"Gebser",
"Martin",
""
]
] |
1507.03979 | Neng-Fa Zhou | Neng-Fa Zhou, Roman Bartak and Agostino Dovier | Planning as Tabled Logic Programming | 27 pages in TPLP 2015 | Theory and Practice of Logic Programming 15 (2015) 543-558 | 10.1017/S1471068415000216 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes Picat's planner, its implementation, and planning models
for several domains used in the International Planning Competition (IPC) 2014. Picat's planner is implemented using tabling. During search, every state
encountered is tabled, and tabled states are used to effectively perform
resource-bounded search. In Picat, structured data can be used to avoid
enumerating all possible permutations of objects, and term sharing is used to
avoid duplication of common state data. This paper presents several modeling
techniques through the example models, ranging from designing state
representations to facilitate data sharing and symmetry breaking, encoding
actions with operations for efficient precondition checking and state updating,
to incorporating domain knowledge and heuristics. Broadly, this paper
demonstrates the effectiveness of tabled logic programming for planning, and
argues the importance of modeling despite recent significant progress in
domain-independent PDDL planners.
| [
{
"version": "v1",
"created": "Tue, 14 Jul 2015 19:41:26 GMT"
}
] | 1,582,070,400,000 | [
[
"Zhou",
"Neng-Fa",
""
],
[
"Bartak",
"Roman",
""
],
[
"Dovier",
"Agostino",
""
]
] |
1507.04091 | Kuang Zhou | Kuang Zhou (DRUID), Arnaud Martin (DRUID), Quan Pan, Zhun-Ga Liu | Evidential relational clustering using medoids | in The 18th International Conference on Information Fusion, July
2015, Washington, DC, USA , Jul 2015, Washington, United States | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In real clustering applications, proximity data, in which only pairwise
similarities or dissimilarities are known, is more general than object data, in
which each pattern is described explicitly by a list of attributes.
Medoid-based clustering algorithms, which assume the prototypes of classes are
objects, are of great value for partitioning relational data sets. In this
paper a new prototype-based clustering method, named Evidential C-Medoids
(ECMdd), which is an extension of Fuzzy C-Medoids (FCMdd) within the theoretical framework of belief functions, is proposed. In ECMdd, medoids are utilized as
the prototypes to represent the detected classes, including specific classes
and imprecise classes. Specific classes are for the data which are distinctly
far from the prototypes of other classes, while imprecise classes accept the
objects that may be close to the prototypes of more than one class. This soft
decision mechanism could make the clustering results more cautious and reduce
the misclassification rates. Experiments on synthetic and real data sets are used to illustrate the performance of ECMdd. The results show that ECMdd captures well the uncertainty in the internal data structure. Moreover, it is more robust to initialization compared with FCMdd.
| [
{
"version": "v1",
"created": "Wed, 15 Jul 2015 05:49:43 GMT"
}
] | 1,437,004,800,000 | [
[
"Zhou",
"Kuang",
"",
"DRUID"
],
[
"Martin",
"Arnaud",
"",
"DRUID"
],
[
"Pan",
"Quan",
""
],
[
"Liu",
"Zhun-Ga",
""
]
] |
1507.04630 | Iliana Petrova | Piero Andrea Bonatti and Iliana Mineva Petrova and Luigi Sauro | Optimizing the computation of overriding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce optimization techniques for reasoning in DLN---a recently
introduced family of nonmonotonic description logics whose characterizing
features appear well-suited to model the applicative examples naturally arising
in biomedical domains and semantic web access control policies. Such
optimizations are validated experimentally on large KBs with more than 30K
axioms. Speedups exceed 1 order of magnitude. For the first time, response
times compatible with real-time reasoning are obtained with nonmonotonic KBs of
this size.
| [
{
"version": "v1",
"created": "Thu, 16 Jul 2015 16:05:47 GMT"
}
] | 1,437,091,200,000 | [
[
"Bonatti",
"Piero Andrea",
""
],
[
"Petrova",
"Iliana Mineva",
""
],
[
"Sauro",
"Luigi",
""
]
] |
1507.04928 | Kieran Greer Dr | Kieran Greer | A Brain-like Cognitive Process with Shared Methods | null | Int. J. Advanced Intelligence Paradigms, Vol. 18, No. 4, 2021,
pp.481-501, Inderscience | 10.1504/IJAIP.2018.10033335 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a new entropy-style equation that may be useful in a
general sense, but can be applied to a cognitive model with related processes.
The model is based on the human brain, with automatic and distributed pattern
activity. Methods for carrying out the different processes are suggested. The
main purpose of this paper is to reaffirm earlier research on different
knowledge-based and experience-based clustering techniques. The overall
architecture has stayed essentially the same and so it is the localised
processes or smaller details that have been updated. For example, a counting
mechanism is used slightly differently, to measure a level of 'cohesion'
instead of a 'correct' classification, over pattern instances. The introduction
of features has further enhanced the architecture and the new entropy-style
equation is proposed. While an earlier paper defined three levels of functional
requirement, this paper re-defines the levels in a more human vernacular, with
higher-level goals described in terms of action-result pairs.
| [
{
"version": "v1",
"created": "Fri, 17 Jul 2015 11:24:07 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Apr 2016 10:06:58 GMT"
},
{
"version": "v3",
"created": "Sat, 23 Jul 2016 16:00:42 GMT"
},
{
"version": "v4",
"created": "Wed, 23 Nov 2016 14:44:04 GMT"
},
{
"version": "v5",
"created": "Tue, 4 Apr 2017 13:46:24 GMT"
}
] | 1,619,136,000,000 | [
[
"Greer",
"Kieran",
""
]
] |
1507.05122 | Feng Lin | Yingxiao Wu, Yan Zhuang, Xi Long, Feng Lin, Wenyao Xu | Human Gender Classification: A Review | This paper has been withdrawn by the author due to several literature
mistakes | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gender contains a wide range of information regarding the characteristic differences between males and females. Successful gender recognition is essential and critical for many applications in commercial domains, such as human-computer interaction and computer-aided physiological or psychological analysis. Various approaches have been proposed for automatic gender classification using features derived from human bodies and/or behaviors. First, this paper introduces the challenges and applications of gender classification research. Then, the development and framework of gender classification are described. In addition, we compare state-of-the-art approaches, including vision-based methods, biological information-based methods, and social network information-based methods, to provide a comprehensive review of the area of gender classification. Meanwhile, we highlight the strengths and discuss the limitations of each method. Finally, this review also discusses several promising applications for future work.
| [
{
"version": "v1",
"created": "Fri, 17 Jul 2015 21:58:01 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Mar 2016 14:48:45 GMT"
}
] | 1,458,172,800,000 | [
[
"Wu",
"Yingxiao",
""
],
[
"Zhuang",
"Yan",
""
],
[
"Long",
"Xi",
""
],
[
"Lin",
"Feng",
""
],
[
"Xu",
"Wenyao",
""
]
] |
1507.05268 | Gal Dalal | Gal Dalal, Shie Mannor | Reinforcement Learning for the Unit Commitment Problem | Accepted and presented in IEEE PES PowerTech, Eindhoven 2015, paper
ID 462731 | null | 10.1109/PTC.2015.7232646 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work we solve the day-ahead unit commitment (UC) problem, by
formulating it as a Markov decision process (MDP) and finding a low-cost policy
for generation scheduling. We present two reinforcement learning algorithms,
and devise a third one. We compare our results to previous work that uses
simulated annealing (SA), and show a 27% improvement in operation costs, with
a running time of 2.5 minutes (compared to the 2.5 hours of the existing
state-of-the-art).
| [
{
"version": "v1",
"created": "Sun, 19 Jul 2015 09:32:40 GMT"
}
] | 1,479,340,800,000 | [
[
"Dalal",
"Gal",
""
],
[
"Mannor",
"Shie",
""
]
] |
1507.05275 | Swakkhar Shatabda | Shanjida Khatun, Hasib Ul Alam and Swakkhar Shatabda | An Efficient Genetic Algorithm for Discovering Diverse-Frequent Patterns | 2015 International Conference on Electrical Engineering and
Information Communication Technology (ICEEICT) | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Working with exhaustive search on large datasets is infeasible for several
reasons. Recently developed techniques have made pattern set mining feasible
via general solvers that support heuristic search, but these have long
execution times and are limited to small datasets only. In this paper, we
investigate an approach that uses a genetic algorithm to mine a diverse set
of frequent patterns. We propose a fast heuristic search algorithm that
outperforms state-of-the-art methods on a standard set of benchmarks and is
capable of producing satisfactory results within a short period of time. Our
proposed algorithm uses a relative encoding scheme for the patterns and an
effective twin removal technique to ensure diversity throughout the search.
| [
{
"version": "v1",
"created": "Sun, 19 Jul 2015 10:55:09 GMT"
}
] | 1,437,436,800,000 | [
[
"Khatun",
"Shanjida",
""
],
[
"Alam",
"Hasib Ul",
""
],
[
"Shatabda",
"Swakkhar",
""
]
] |
1507.06045 | Gregory Hasseler | Gregory Hasseler | Adapting Stochastic Search For Real-time Dynamic Weighted Constraint
Satisfaction | 187 pages, Master's Thesis submitted to State University of New York
Institute of Technology | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents two new algorithms for performing constraint satisfaction.
The first algorithm presented, DMaxWalkSat, is a constraint solver specialized
for solving dynamic, weighted constraint satisfaction problems. The second
algorithm, RDMaxWalkSat, is a derivative of DMaxWalkSat that has been modified
into an anytime algorithm, and hence supports real-time constraint satisfaction.
DMaxWalkSat is shown to offer performance advantages in terms of solution
quality and runtime over its parent constraint solver, MaxWalkSat. RDMaxWalkSat
is shown to support anytime operation. The introduction of these algorithms
brings another tool to the areas of computer science that naturally represent
problems as constraint satisfaction problems, an example of which is the robust
coherence algorithm.
| [
{
"version": "v1",
"created": "Wed, 22 Jul 2015 03:32:52 GMT"
}
] | 1,437,609,600,000 | [
[
"Hasseler",
"Gregory",
""
]
] |
1507.06500 | Hai Zhuge Mr | Hai Zhuge | Mapping Big Data into Knowledge Space with Cognitive
Cyber-Infrastructure | 59 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Big data research has attracted great attention in science, technology,
industry and society. It is developing with the evolving scientific paradigm,
the fourth industrial revolution, and the transformational innovation of
technologies. However, its nature and fundamental challenge have not been
recognized, and its own methodology has not been formed. This paper explores
and answers the following questions: What is big data? What are the basic
methods for representing, managing and analyzing big data? What is the
relationship between big data and knowledge? Can we find a mapping from big
data into knowledge space? What kind of infrastructure is required to support
not only big data management and analysis but also knowledge discovery, sharing
and management? What is the relationship between big data and the science paradigm?
What is the nature and fundamental challenge of big data computing? A
multi-dimensional perspective is presented toward a methodology of big data
computing.
| [
{
"version": "v1",
"created": "Sat, 18 Jul 2015 21:38:21 GMT"
}
] | 1,437,696,000,000 | [
[
"Zhuge",
"Hai",
""
]
] |
1507.06566 | Mark Law | Mark Law, Alessandra Russo and Krysia Broda | Learning Weak Constraints in Answer Set Programming | To appear in Theory and Practice of Logic Programming (TPLP),
Proceedings of ICLP 2015 | Theory and Practice of Logic Programming 15 (2015) 511-525 | 10.1017/S1471068415000198 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contributes to the area of inductive logic programming by
presenting a new learning framework that allows the learning of weak
constraints in Answer Set Programming (ASP). The framework, called Learning
from Ordered Answer Sets, generalises our previous work on learning ASP
programs without weak constraints, by considering a new notion of examples as
ordered pairs of partial answer sets that exemplify which answer sets of a
learned hypothesis (together with a given background knowledge) are preferred
to others. In this new learning task inductive solutions are searched within a
hypothesis space of normal rules, choice rules, and hard and weak constraints.
We propose a new algorithm, ILASP2, which is sound and complete with respect to
our new learning framework. We investigate its applicability to learning
preferences in an interview scheduling problem and also demonstrate that when
restricted to the task of learning ASP programs without weak constraints,
ILASP2 can be much more efficient than our previously proposed system.
| [
{
"version": "v1",
"created": "Thu, 23 Jul 2015 17:03:39 GMT"
}
] | 1,582,070,400,000 | [
[
"Law",
"Mark",
""
],
[
"Russo",
"Alessandra",
""
],
[
"Broda",
"Krysia",
""
]
] |
1507.06689 | Sarah Alice Gaggl | Sarah A. Gaggl, Norbert Manthey, Alessandro Ronca, Johannes P.
Wallner, Stefan Woltran | Improved Answer-Set Programming Encodings for Abstract Argumentation | To appear in Theory and Practice of Logic Programming (TPLP),
Proceedings of ICLP 2015 | Theory and Practice of Logic Programming 15 (2015) 434-448 | 10.1017/S1471068415000149 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The design of efficient solutions for abstract argumentation problems is a
crucial step towards advanced argumentation systems. One of the most prominent
approaches in the literature is to use Answer-Set Programming (ASP) for this
endeavor. In this paper, we present new encodings for three prominent
argumentation semantics using the concept of conditional literals in
disjunctions as provided by the ASP-system clingo. Our new encodings are not
only more succinct than previous versions, but also outperform them on standard
benchmarks.
| [
{
"version": "v1",
"created": "Thu, 23 Jul 2015 21:43:48 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Oct 2015 13:54:18 GMT"
}
] | 1,582,070,400,000 | [
[
"Gaggl",
"Sarah A.",
""
],
[
"Manthey",
"Norbert",
""
],
[
"Ronca",
"Alessandro",
""
],
[
"Wallner",
"Johannes P.",
""
],
[
"Woltran",
"Stefan",
""
]
] |
1507.07058 | Azlan Iqbal | Azlan Iqbal, Matej Guid, Simon Colton, Jana Krivec, Shazril Azman,
Boshra Haghighi | The Digital Synaptic Neural Substrate: A New Approach to Computational
Creativity | 39 pages, 5 appendices. Full version:
http://www.springer.com/gp/book/9783319280783 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new artificial intelligence (AI) approach called, the 'Digital
Synaptic Neural Substrate' (DSNS). It uses selected attributes from objects in
various domains (e.g. chess problems, classical music, renowned artworks) and
recombines them in such a way as to generate new attributes that can then, in
principle, be used to create novel objects of creative value to humans relating
to any one of the source domains. This allows some of the burden of creative
content generation to be passed from humans to machines. The approach was
tested in the domain of chess problem composition. We used it to automatically
compose numerous sets of chess problems based on attributes extracted and
recombined from chess problems and tournament games by humans, renowned
paintings, computer-evolved abstract art, photographs of people, and classical
music tracks. The quality of these generated chess problems was then assessed
automatically using an existing and experimentally-validated computational
chess aesthetics model. They were also assessed by human experts in the domain.
The results suggest that attributes collected and recombined from chess and
other domains using the DSNS approach can indeed be used to automatically
generate chess problems of reasonably high aesthetic quality. In particular, a
low quality chess source (i.e. tournament game sequences between weak players)
used in combination with actual photographs of people was able to produce
three-move chess problems of comparable quality or better to those generated
using a high quality chess source (i.e. published compositions by human
experts), and more efficiently as well. Why information from a foreign domain
can be integrated and functional in this way remains an open question for now.
The DSNS approach is, in principle, scalable and applicable to any domain in
which objects have attributes that can be represented using real numbers.
| [
{
"version": "v1",
"created": "Sat, 25 Jul 2015 03:00:31 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Sep 2016 08:10:17 GMT"
}
] | 1,474,416,000,000 | [
[
"Iqbal",
"Azlan",
""
],
[
"Guid",
"Matej",
""
],
[
"Colton",
"Simon",
""
],
[
"Krivec",
"Jana",
""
],
[
"Azman",
"Shazril",
""
],
[
"Haghighi",
"Boshra",
""
]
] |
1507.07462 | Florentin Smarandache | Florentin Smarandache | Unification of Fusion Theories, Rules, Filters, Image Fusion and Target
Tracking Methods (UFT) | 79 pages, a diagram. arXiv admin note: substantial text overlap with
arXiv:cs/0409040, arXiv:0901.1289, arXiv:cs/0410033 | International Journal of Applied Mathematics & Statistics, Vol. 2,
1-14, 2004 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The author has argued, in various papers, conference and seminar
presentations, and scientific grant applications (between 2004 and 2015), for
the unification of fusion theories, combinations of fusion rules, image fusion
procedures, filter algorithms, and target tracking methods for more accurate
application to real-world problems - since no single fusion theory or fusion
rule fully satisfies all needed applications. For each particular
application, one selects the most appropriate fusion space and fusion model,
then the fusion rules, and the algorithms of implementation. He has worked on
the Unification of Fusion Theories (UFT), which reads like a cooking recipe -
or, better, like a logical chart for a computer programmer - but no other
method is apparent for unifying all these elements. The unification scenario
presented herein, which is now in an incipient form, should periodically be
updated to incorporate new discoveries from fusion and engineering research.
| [
{
"version": "v1",
"created": "Mon, 27 Jul 2015 15:59:03 GMT"
}
] | 1,438,041,600,000 | [
[
"Smarandache",
"Florentin",
""
]
] |
1507.07648 | Rehan Abdul Aziz | Rehan Abdul Aziz and Geoffrey Chu and Christian Muise and Peter
Stuckey | Projected Model Counting | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model counting is the task of computing the number of assignments to
variables V that satisfy a given propositional theory F. Model counting is an
essential tool in probabilistic reasoning. In this paper, we introduce the
problem of model counting projected on a subset P of original variables that we
call 'priority' variables. The task is to compute the number of assignments to
P such that there exists an extension to 'non-priority' variables V\P that
satisfies F. Projected model counting arises when some parts of the model are
irrelevant to the counts, in particular when we require additional variables to
model the problem we are counting in SAT. We discuss three different approaches
to projected model counting (two of which are novel), and compare their
performance on different benchmark problems.
To appear in 18th International Conference on Theory and Applications of
Satisfiability Testing, September 24-27, 2015, Austin, Texas, USA
| [
{
"version": "v1",
"created": "Tue, 28 Jul 2015 05:45:05 GMT"
}
] | 1,438,128,000,000 | [
[
"Aziz",
"Rehan Abdul",
""
],
[
"Chu",
"Geoffrey",
""
],
[
"Muise",
"Christian",
""
],
[
"Stuckey",
"Peter",
""
]
] |
1507.07749 | Joseph Ramsey | Joseph D. Ramsey | Scaling up Greedy Causal Search for Continuous Variables | 12 pages, 2 figures, tech report for Center for Causal Discovery | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As standardly implemented in R or the Tetrad program, causal search
algorithms used most widely or effectively by scientists have severe
dimensionality constraints that make them inappropriate for big data problems
without sacrificing accuracy. However, implementation improvements are
possible. We explore optimizations for the Greedy Equivalence Search that allow
search on 50,000-variable problems in 13 minutes for sparse models with 1000
samples on a four-processor, 16G laptop computer. We finish a problem with 1000
samples on 1,000,000 variables in 18 hours for sparse models on a supercomputer
node at the Pittsburgh Supercomputing Center with 40 processors and 384 G RAM.
The same algorithm can be applied to discrete data, with a slower discrete
score, though the discrete implementation currently does not scale as well in
our experiments; we have managed to scale up to about 10,000 variables in
sparse models with 1000 samples.
| [
{
"version": "v1",
"created": "Tue, 28 Jul 2015 12:59:19 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Nov 2015 22:55:28 GMT"
}
] | 1,447,372,800,000 | [
[
"Ramsey",
"Joseph D.",
""
]
] |
1507.08073 | Jian Yu | Jian Yu | Communication: Words and Conceptual Systems | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Words (phrases or symbols) play a key role in human life. Word (phrase or
symbol) representation is the fundamental problem for knowledge representation
and understanding. A word (phrase or symbol) usually represents the name of a
category. However, it remains a challenge to represent a category in a way
that makes it easily understood. In this paper, a new representation for a
category is discussed, which can be considered a generalization of the
classic set. In order to reduce representation complexity, the economy
principle of category representation is proposed. The proposed category
representation provides a powerful tool for analyzing conceptual systems,
relations between words, communication, knowledge, and situations. More
specifically, conceptual systems, word relations and communication are
mathematically defined and classified, yielding notions such as the ideal
conceptual system and perfect communication; the relation between words and
sentences is also studied, which shows that knowledge is words. Furthermore,
how conceptual systems and words depend on situations is presented, and how
truth is defined is also discussed.
| [
{
"version": "v1",
"created": "Wed, 29 Jul 2015 09:21:15 GMT"
},
{
"version": "v10",
"created": "Tue, 15 Sep 2015 09:23:02 GMT"
},
{
"version": "v11",
"created": "Wed, 16 Sep 2015 02:13:24 GMT"
},
{
"version": "v12",
"created": "Wed, 28 Oct 2015 00:56:45 GMT"
},
{
"version": "v13",
"created": "Mon, 16 Nov 2015 02:12:17 GMT"
},
{
"version": "v14",
"created": "Fri, 4 Dec 2015 03:36:06 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Aug 2015 12:13:07 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Aug 2015 14:24:38 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Aug 2015 14:02:14 GMT"
},
{
"version": "v5",
"created": "Wed, 26 Aug 2015 16:58:08 GMT"
},
{
"version": "v6",
"created": "Thu, 27 Aug 2015 14:39:39 GMT"
},
{
"version": "v7",
"created": "Mon, 31 Aug 2015 03:35:03 GMT"
},
{
"version": "v8",
"created": "Sun, 6 Sep 2015 22:23:44 GMT"
},
{
"version": "v9",
"created": "Wed, 9 Sep 2015 09:37:39 GMT"
}
] | 1,449,446,400,000 | [
[
"Yu",
"Jian",
""
]
] |
1507.08444 | Indre Zliobaite | Indre Zliobaite and Mikhail Khokhlov | Optimal estimates for short horizon travel time prediction in urban
areas | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Increasing popularity of mobile route planning applications based on GPS
technology provides opportunities for collecting traffic data in urban
environments. One of the main challenges for travel time estimation and
prediction in such a setting is how to aggregate data from vehicles that have
followed different routes, and predict travel time for other routes of
interest. One approach is to predict travel times for route segments, and sum
those estimates to obtain a prediction for the whole route. We study how to
obtain optimal predictions in this scenario. It appears that the optimal
estimate, minimizing the expected mean absolute error, is a combination of the
mean and the median travel times on each segment, where the combination
function depends on the number of segments in the route of interest. We present
a methodology for obtaining such predictions, and demonstrate its effectiveness
with a case study using travel time data from a district of St. Petersburg
collected over one year. The proposed methodology can be applied for real-time
prediction of expected travel times in an urban road network.
| [
{
"version": "v1",
"created": "Thu, 30 Jul 2015 10:46:52 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Aug 2015 08:45:42 GMT"
}
] | 1,439,251,200,000 | [
[
"Zliobaite",
"Indre",
""
],
[
"Khokhlov",
"Mikhail",
""
]
] |
1507.08559 | Ganesh Ram Santhanam | Ganesh Ram Santhanam and Samik Basu and Vasant Honavar | CRISNER: A Practically Efficient Reasoner for Qualitative Preferences | 15 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present CRISNER (Conditional & Relative Importance Statement Network
PrEference Reasoner), a tool that provides practically efficient as well as
exact reasoning about qualitative preferences in popular ceteris paribus
preference languages such as CP-nets, TCP-nets, CP-theories, etc. The tool uses
a model checking engine to translate preference specifications and queries into
appropriate Kripke models and verifiable properties over them respectively. The
distinguishing features of the tool are: (1) exact and provably correct query
answering for testing dominance, consistency with respect to a preference
specification, and testing equivalence and subsumption of two sets of
preferences; (2) automatic generation of proofs evidencing the correctness of
answer produced by CRISNER to any of the above queries; (3) XML inputs and
outputs that make it portable and pluggable into other applications. We also
describe the extensible architecture of CRISNER, which can be extended to new
preference formalisms based on ceteris paribus semantics that may be developed
in the future.
| [
{
"version": "v1",
"created": "Thu, 30 Jul 2015 16:03:48 GMT"
}
] | 1,438,300,800,000 | [
[
"Santhanam",
"Ganesh Ram",
""
],
[
"Basu",
"Samik",
""
],
[
"Honavar",
"Vasant",
""
]
] |
1507.08826 | Matteo Brunelli | Matteo Brunelli | Studying a set of properties of inconsistency indices for pairwise
comparisons | null | null | 10.1007/s10479-016-2166-8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pairwise comparisons between alternatives are a well-established tool to
decompose decision problems into smaller and more easily tractable
sub-problems. However, due to our limited rationality, the subjective
preferences expressed by decision makers over pairs of alternatives can hardly
ever be consistent. Therefore, several inconsistency indices have been proposed
in the literature to quantify the extent of the deviation from complete
consistency. Only recently, a set of properties has been proposed to define a
family of functions representing inconsistency indices. The scope of this paper
is twofold. Firstly, it expands the set of properties by adding and justifying
a new one. Secondly, it continues the study of inconsistency indices to check
whether or not they satisfy the above-mentioned properties. Out of the four
indices considered in this paper, two, in their present form, fail to satisfy
some properties. An adjusted version of one index is proposed so that it
fulfills them.
| [
{
"version": "v1",
"created": "Fri, 31 Jul 2015 10:56:40 GMT"
},
{
"version": "v2",
"created": "Mon, 14 Mar 2016 08:36:12 GMT"
}
] | 1,458,000,000,000 | [
[
"Brunelli",
"Matteo",
""
]
] |
1508.00019 | Michael S. Gashler Ph.D. | Michael S. Gashler and Zachariah Kindle and Michael R. Smith | A Minimal Architecture for General Cognition | 8 pages, 8 figures, conference, Proceedings of the 2015 International
Joint Conference on Neural Networks | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A minimalistic cognitive architecture called MANIC is presented. The MANIC
architecture requires only three function approximating models, and one state
machine. Even with so few major components, it is theoretically sufficient to
achieve functional equivalence with all other cognitive architectures, and can
be practically trained. Instead of seeking to transfer architectural
inspiration from biology into artificial intelligence, MANIC seeks to minimize
novelty and follow the most well-established constructs that have evolved
within various sub-fields of data science. From this perspective, MANIC offers
an alternate approach to a long-standing objective of artificial intelligence.
This paper provides a theoretical analysis of the MANIC architecture.
| [
{
"version": "v1",
"created": "Fri, 31 Jul 2015 20:21:38 GMT"
}
] | 1,438,646,400,000 | [
[
"Gashler",
"Michael S.",
""
],
[
"Kindle",
"Zachariah",
""
],
[
"Smith",
"Michael R.",
""
]
] |
1508.00212 | Jakub Kowalski | Jakub Kowalski, Marek Szyku{\l}a | Procedural Content Generation for GDL Descriptions of Simplified
Boardgames | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | We present initial research towards procedural generation of Simplified
Boardgames and translating them into an efficient GDL code. This is a step
towards establishing Simplified Boardgames as a comparison class for General
Game Playing agents. To generate playable, human readable, and balanced
chess-like games we use an adaptive evolutionary algorithm with the fitness
function based on simulated playouts. In future, we plan to use the proposed
method to diversify and extend the set of GGP tournament games by those with
fully automatically generated rules.
| [
{
"version": "v1",
"created": "Sun, 2 Aug 2015 10:11:38 GMT"
}
] | 1,438,646,400,000 | [
[
"Kowalski",
"Jakub",
""
],
[
"Szykuła",
"Marek",
""
]
] |
1508.00280 | Johannes Textor | Johannes Textor, Alexander Idelberger, Maciej Li\'skiewicz | Learning from Pairwise Marginal Independencies | 10 pages, 6 figures, 2 tables. Published at the 31st Conference on
Uncertainty in Artificial Intelligence (UAI 2015) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider graphs that represent pairwise marginal independencies amongst a
set of variables (for instance, the zero entries of a covariance matrix for
normal data). We characterize the directed acyclic graphs (DAGs) that
faithfully explain a given set of independencies, and derive algorithms to
efficiently enumerate such structures. Our results map out the space of
faithful causal models for a given set of pairwise marginal independence
relations. This allows us to show the extent to which causal inference is
possible without using conditional independence tests.
| [
{
"version": "v1",
"created": "Sun, 2 Aug 2015 20:13:41 GMT"
}
] | 1,438,646,400,000 | [
[
"Textor",
"Johannes",
""
],
[
"Idelberger",
"Alexander",
""
],
[
"Liśkiewicz",
"Maciej",
""
]
] |
1508.00377 | Martin Cerny | Martin \v{C}ern\'y, Tom\'a\v{s} Plch, Mat\v{e}j Marko, Jakub Gemrot,
Petr Ondr\'a\v{c}ek, Cyril Brom | Using Behavior Objects to Manage Complexity in Virtual Worlds | Currently under review in IEEE Transactions on Computational
Intelligence and AI in Games | null | 10.1109/TCIAIG.2016.2528499 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The quality of high-level AI of non-player characters (NPCs) in commercial
open-world games (OWGs) has been increasing during the past years. However, due
to constraints specific to the game industry, this increase has been slow and
it has been driven by larger budgets rather than adoption of new complex AI
techniques. Most of the contemporary AI is still expressed as hard-coded
scripts. The complexity and manageability of the script codebase is one of the
key limiting factors for further AI improvements. In this paper we address this
issue. We present behavior objects - a general approach to development of NPC
behaviors for large OWGs. Behavior objects are inspired by object-oriented
programming and extend the concept of smart objects. Our approach promotes
encapsulation of data and code for multiple related behaviors in one place,
hiding internal details and embedding intelligence in the environment. Behavior
objects are a natural abstraction of five different techniques that we have
implemented to manage AI complexity in an upcoming AAA OWG. We report the
details of the implementations in the context of behavior trees and the lessons
learned during development. Our work should serve as inspiration for AI
architecture designers from both the academia and the industry.
| [
{
"version": "v1",
"created": "Mon, 3 Aug 2015 11:29:21 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Nov 2015 19:05:13 GMT"
}
] | 1,456,185,600,000 | [
[
"Černý",
"Martin",
""
],
[
"Plch",
"Tomáš",
""
],
[
"Marko",
"Matěj",
""
],
[
"Gemrot",
"Jakub",
""
],
[
"Ondráček",
"Petr",
""
],
[
"Brom",
"Cyril",
""
]
] |
1508.00801 | Mehdi Kaytoue | Olivier Cavadenti and Victor Codocedo and Jean-Fran\c{c}ois Boulicaut
and Mehdi Kaytoue | Identifying Avatar Aliases in Starcraft 2 | Machine Learning and Data Mining for Sports Analytics ECML/PKDD 2015
workshop, 11 September 2015, Porto, Portugal | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In electronic sports, cyberathletes conceal their online training using
different avatars (virtual identities), allowing them to avoid being recognized
by the opponents they may face in future competitions. In this article, we propose
a method to tackle this avatar aliases identification problem. Our method
trains a classifier on behavioural data and processes the confusion matrix to
output label pairs which concentrate confusion. We experimented with Starcraft
2 and report our first results.
| [
{
"version": "v1",
"created": "Tue, 4 Aug 2015 15:37:44 GMT"
}
] | 1,438,732,800,000 | [
[
"Cavadenti",
"Olivier",
""
],
[
"Codocedo",
"Victor",
""
],
[
"Boulicaut",
"Jean-François",
""
],
[
"Kaytoue",
"Mehdi",
""
]
] |
1508.00879 | Ankit Agrawal | Ankit Agrawal | Qualitative Decision Methods for Multi-Attribute Decision Making | 14 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fundamental problem underlying all multi-criteria decision analysis
(MCDA) problems is that of dominance between any two alternatives: "Given two
alternatives A and B, each described by a set of criteria, is A preferred to B
with respect to a set of decision maker (DM) preferences over the criteria?".
Depending on the application in which MCDA is performed, the alternatives may
represent strategies and policies for business, potential locations for setting
up new facilities, designs of buildings, etc. The general objective of MCDA is
to enable the DM to order all alternatives in order of the stated preferences,
and choose the ones that are best, i.e., optimal with respect to the
preferences over the criteria. This article presents and summarizes a recently
developed MCDA framework that orders the set of alternatives when the relative
importance preferences are incomplete, imprecise, or qualitative in nature.
| [
{
"version": "v1",
"created": "Tue, 4 Aug 2015 19:27:21 GMT"
}
] | 1,438,732,800,000 | [
[
"Agrawal",
"Ankit",
""
]
] |
1508.00986 | Zhuoran Wang | Zhuoran Wang, Paul A. Crook, Wenshuo Tang, Oliver Lemon | On the Linear Belief Compression of POMDPs: A re-examination of current
methods | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Belief compression improves the tractability of large-scale partially
observable Markov decision processes (POMDPs) by finding projections from
high-dimensional belief space onto low-dimensional approximations, where
solving to obtain action selection policies requires fewer computations. This
paper develops a unified theoretical framework to analyse three existing linear
belief compression approaches, including value-directed compression and two
non-negative matrix factorisation (NMF) based algorithms. The results indicate
that all the three known belief compression methods have their own critical
deficiencies. Therefore, projective NMF belief compression is proposed (P-NMF),
aiming to overcome the drawbacks of the existing techniques. The performance of
the proposed algorithm is examined on four POMDP problems of reasonably large
scale, in comparison with existing techniques. Additionally, the
competitiveness of belief compression is compared empirically to a
state-of-the-art heuristic search based POMDP solver and their relative merits
in solving large-scale POMDPs are investigated.
| [
{
"version": "v1",
"created": "Wed, 5 Aug 2015 06:45:09 GMT"
}
] | 1,438,819,200,000 | [
[
"Wang",
"Zhuoran",
""
],
[
"Crook",
"Paul A.",
""
],
[
"Tang",
"Wenshuo",
""
],
[
"Lemon",
"Oliver",
""
]
] |
1508.01191 | Waldemar Koczkodaj Prof. | J. Fueloep, W.W. Koczkodaj, S.J. Szarek | A different perspective on a scale for pairwise comparisons | 16 pages, 1 figure; the mathematical theory has been provided for the
use of small scale (1 to 3) for pairwise comparisons (but not only) | Logic Journal of the IGPL Volume: 18 Issue: 6 Pages: 859-869
Published: DEC 2010 | 10.1093/jigpal/jzp062 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the major challenges for collective intelligence is inconsistency,
which is unavoidable whenever subjective assessments are involved. Pairwise
comparisons allow one to represent such subjective assessments and to process
them by analyzing, quantifying and identifying the inconsistencies.
We propose using smaller scales for pairwise comparisons and provide
mathematical and practical justifications for this change. Our postulate's aim
is to initiate a paradigm shift in the search for a better scale construction
for pairwise comparisons. Beyond pairwise comparisons, the results presented
may be relevant to other methods using subjective scales.
Keywords: pairwise comparisons, collective intelligence, scale, subjective
assessment, inaccuracy, inconsistency.
| [
{
"version": "v1",
"created": "Wed, 5 Aug 2015 19:50:21 GMT"
}
] | 1,438,819,200,000 | [
[
"Fueloep",
"J.",
""
],
[
"Koczkodaj",
"W. W.",
""
],
[
"Szarek",
"S. J.",
""
]
] |
1508.03523 | Jesus Cerquides | Jordi Roca-Lacostena and Jesus Cerquides and Marc Pouly | Sufficient and necessary conditions for Dynamic Programming in
Valuation-Based Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Valuation algebras abstract a large number of formalisms for automated
reasoning and enable the definition of generic inference procedures. Many of
these formalisms provide some notion of solution. Typical examples are
satisfying assignments in constraint systems, models in logics or solutions to
linear equation systems.
Many widely used dynamic programming algorithms for optimization problems
rely on low treewidth decompositions and can be understood as particular cases
of a single algorithmic scheme for finding solutions in a valuation algebra.
The most encompassing description of this algorithmic scheme to date has been
proposed by Pouly and Kohlas together with sufficient conditions for its
correctness. Unfortunately, the formalization relies on a theorem for which we
provide counterexamples. In spite of that, the mainline of Pouly and Kohlas'
theory is correct, although some of the necessary conditions have to be
revised. In this paper we analyze the impact that the counter-examples have on
the theory, and rebuild the theory providing correct sufficient conditions for
the algorithms. Furthermore, we also provide necessary conditions for the
algorithms, allowing for a sharper characterization of when the algorithmic
scheme can be applied.
| [
{
"version": "v1",
"created": "Fri, 14 Aug 2015 14:51:54 GMT"
}
] | 1,439,769,600,000 | [
[
"Roca-Lacostena",
"Jordi",
""
],
[
"Cerquides",
"Jesus",
""
],
[
"Pouly",
"Marc",
""
]
] |
1508.03671 | Ibrahim Ozkan | Ibrahim Ozkan, I. Burhan Turksen | Fuzzy Longest Common Subsequence Matching With FCM Using R | Prepared April 17, 2013. 26 pages; updated March 10, 2016 to include R
code | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Capturing the interdependencies between real valued time series can be
achieved by finding common similar patterns. The abstraction of time series
makes the process of finding similarities closer to the way humans do it.
Therefore, abstraction by means of symbolic levels and finding the common
patterns attracts researchers. One particular algorithm, Longest Common
Subsequence, has been used successfully as a similarity measure between two
sequences including real valued time series. In this paper, we propose Fuzzy
Longest Common Subsequence matching for time series.
| [
{
"version": "v1",
"created": "Fri, 14 Aug 2015 22:19:48 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Dec 2016 17:53:53 GMT"
}
] | 1,482,192,000,000 | [
[
"Ozkan",
"Ibrahim",
""
],
[
"Turksen",
"I. Burhan",
""
]
] |
1508.03812 | Thuc Le Ph.D | Jiuyong Li, Saisai Ma, Thuc Duy Le, Lin Liu and Jixue Liu | Causal Decision Trees | null | null | 10.1109/TKDE.2016.2619350 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Uncovering causal relationships in data is a major objective of data
analytics. Causal relationships are normally discovered with designed
experiments, e.g. randomised controlled trials, which, however are expensive or
infeasible to be conducted in many cases. Causal relationships can also be
found using some well designed observational studies, but they require domain
experts' knowledge and the process is normally time consuming. Hence there is a
need for scalable and automated methods for causal relationship exploration in
data. Classification methods are fast and they could be practical substitutes
for finding causal signals in data. However, classification methods are not
designed for causal discovery and a classification method may find false causal
signals and miss the true ones. In this paper, we develop a causal decision
tree where nodes have causal interpretations. Our method follows a well
established causal inference framework and makes use of a classic statistical
test. The method is practical for finding causal signals in large data sets.
| [
{
"version": "v1",
"created": "Sun, 16 Aug 2015 11:31:49 GMT"
}
] | 1,477,958,400,000 | [
[
"Li",
"Jiuyong",
""
],
[
"Ma",
"Saisai",
""
],
[
"Le",
"Thuc Duy",
""
],
[
"Liu",
"Lin",
""
],
[
"Liu",
"Jixue",
""
]
] |
1508.03819 | Thuc Le Ph.D | Jiuyong Li, Thuc Duy Le, Lin Liu, Jixue Liu, Zhou Jin, Bingyu Sun,
Saisai Ma | From Observational Studies to Causal Rule Mining | This paper has been accepted by ACM TIST journal and will be
available soon | ACM Trans. Intell. Syst. Technol. 7, 2, Article 14 (November
2015), 27 pages | 10.1145/2746410 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Randomised controlled trials (RCTs) are the most effective approach to causal
discovery, but in many circumstances it is impossible to conduct RCTs.
Therefore observational studies based on passively observed data are widely
accepted as an alternative to RCTs. However, in observational studies, prior
knowledge is required to generate the hypotheses about the cause-effect
relationships to be tested, hence they can only be applied to problems with
available domain knowledge and a handful of variables. In practice, many data
sets are of high dimensionality, which leaves observational studies out of the
opportunities for causal discovery from such a wealth of data sources. In
another direction, many efficient data mining methods have been developed to
identify associations among variables in large data sets. The problem is,
causal relationships imply associations, but the reverse is not always true.
However we can see the synergy between the two paradigms here. Specifically,
association rule mining can be used to deal with the high-dimensionality
problem while observational studies can be utilised to eliminate non-causal
associations. In this paper we propose the concept of causal rules (CRs) and
develop an algorithm for mining CRs in large data sets. We use the idea of
retrospective cohort studies to detect CRs based on the results of association
rule mining. Experiments with both synthetic and real world data sets have
demonstrated the effectiveness and efficiency of CR mining. In comparison with
the commonly used causal discovery methods, the proposed approach in general is
faster and has better or competitive performance in finding correct or sensible
causes. It is also capable of finding a cause consisting of multiple variables,
a feature that other causal discovery methods do not possess.
| [
{
"version": "v1",
"created": "Sun, 16 Aug 2015 12:33:18 GMT"
}
] | 1,478,822,400,000 | [
[
"Li",
"Jiuyong",
""
],
[
"Le",
"Thuc Duy",
""
],
[
"Liu",
"Lin",
""
],
[
"Liu",
"Jixue",
""
],
[
"Jin",
"Zhou",
""
],
[
"Sun",
"Bingyu",
""
],
[
"Ma",
"Saisai",
""
]
] |
1508.04032 | Yexiang Xue | Yexiang Xue, Stefano Ermon, Ronan Le Bras, Carla P. Gomes, Bart Selman | Variable Elimination in the Fourier Domain | Proceedings of the 33rd International Conference on Machine Learning
(ICML), 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to represent complex high dimensional probability distributions
in a compact form is one of the key insights in the field of graphical models.
Factored representations are ubiquitous in machine learning and lead to major
computational advantages. We explore a different type of compact representation
based on discrete Fourier representations, complementing the classical approach
based on conditional independencies. We show that a large class of
probabilistic graphical models have a compact Fourier representation. This
theoretical result opens up an entirely new way of approximating a probability
distribution. We demonstrate the significance of this approach by applying it
to the variable elimination algorithm. Compared with the traditional bucket
representation and other approximate inference algorithms, we obtain
significant improvements.
| [
{
"version": "v1",
"created": "Mon, 17 Aug 2015 14:04:07 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Jun 2016 03:18:10 GMT"
}
] | 1,466,640,000,000 | [
[
"Xue",
"Yexiang",
""
],
[
"Ermon",
"Stefano",
""
],
[
"Bras",
"Ronan Le",
""
],
[
"Gomes",
"Carla P.",
""
],
[
"Selman",
"Bart",
""
]
] |
1508.04087 | J. G. Wolff | J. G. Wolff | The SP theory of intelligence: distinctive features and advantages | null | IEEE Access, 4, 216-246, 2016 | 10.1109/ACCESS.2015.2513822 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper highlights distinctive features of the "SP theory of intelligence"
and its apparent advantages compared with some AI-related alternatives.
Distinctive features and advantages are: simplification and integration of
observations and concepts; simplification and integration of structures and
processes in computing systems; the theory is itself a theory of computing; it
can be the basis for new architectures for computers; information compression
via the matching and unification of patterns and, more specifically, via
multiple alignment, is fundamental; transparency in the representation and
processing of knowledge; the discovery of 'natural' structures via information
compression (DONSVIC); interpretations of mathematics; interpretations in human
perception and cognition; and realisation of abstract concepts in terms of
neurons and their inter-connections ("SP-neural"). These things relate to
AI-related alternatives: minimum length encoding and related concepts; deep
learning in neural networks; unified theories of cognition and related
research; universal search; Bayesian networks and more; pattern recognition and
vision; the analysis, production, and translation of natural language;
unsupervised learning of natural language; exact and inexact forms of
reasoning; representation and processing of diverse forms of knowledge; IBM's
Watson; software engineering; solving problems associated with big data, and in
the development of intelligence in autonomous robots. In conclusion, the SP
system can provide a firm foundation for the long-term development of AI, with
many potential benefits and applications. It may also deliver useful results on
relatively short timescales. A high-parallel, open-source version of the SP
machine, derived from the SP computer model, would be a means for researchers
everywhere to explore what can be done with the system, and to create new
versions of it.
| [
{
"version": "v1",
"created": "Mon, 17 Aug 2015 17:15:13 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Aug 2015 08:48:08 GMT"
},
{
"version": "v3",
"created": "Thu, 17 Sep 2015 10:16:04 GMT"
},
{
"version": "v4",
"created": "Fri, 6 Nov 2015 17:59:52 GMT"
},
{
"version": "v5",
"created": "Sun, 20 Dec 2015 12:05:50 GMT"
},
{
"version": "v6",
"created": "Tue, 15 Mar 2016 16:09:02 GMT"
}
] | 1,458,086,400,000 | [
[
"Wolff",
"J. G.",
""
]
] |
1508.04261 | Paolo Campigotto | Paolo Campigotto, Roberto Battiti, Andrea Passerini | Learning Modulo Theories for preference elicitation in hybrid domains | 50 pages, 3 figures, submitted to Artificial Intelligence Journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces CLEO, a novel preference elicitation algorithm capable
of recommending complex objects in hybrid domains, characterized by both
discrete and continuous attributes and constraints defined over them. The
algorithm assumes minimal initial information, i.e., a set of catalog
attributes, and defines decisional features as logic formulae combining Boolean
and algebraic constraints over the attributes. The (unknown) utility of the
decision maker (DM) is modelled as a weighted combination of features. CLEO
iteratively alternates a preference elicitation step, where pairs of candidate
solutions are selected based on the current utility model, and a refinement
step where the utility is refined by incorporating the feedback received. The
elicitation step leverages a Max-SMT solver to return optimal hybrid solutions
according to the current utility model. The refinement step is implemented as
learning to rank, and a sparsifying norm is used to favour the selection of few
informative features in the combinatorial space of candidate decisional
features.
CLEO is the first preference elicitation algorithm capable of dealing with
hybrid domains, thanks to the use of Max-SMT technology, while retaining
uncertainty in the DM utility and noisy feedback. Experimental results on
complex recommendation tasks show the ability of CLEO to quickly focus towards
optimal solutions, as well as its capacity to recover from suboptimal initial
choices. While no competitors exist in the hybrid setting, CLEO outperforms a
state-of-the-art Bayesian preference elicitation algorithm when applied to a
purely discrete task.
| [
{
"version": "v1",
"created": "Tue, 18 Aug 2015 09:50:33 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Aug 2015 10:29:29 GMT"
},
{
"version": "v3",
"created": "Mon, 31 Aug 2015 09:37:08 GMT"
}
] | 1,441,065,600,000 | [
[
"Campigotto",
"Paolo",
""
],
[
"Battiti",
"Roberto",
""
],
[
"Passerini",
"Andrea",
""
]
] |
1508.04570 | J. G. Wolff | J. Gerard Wolff, Vasile Palade | Proposal for the creation of a research facility for the development of
the SP machine | arXiv admin note: text overlap with arXiv:1508.04087. Substantial
text overlap with arXiv:1409.8027 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is a proposal to create a research facility for the development of a
high-parallel version of the "SP machine", based on the "SP theory of
intelligence". We envisage that the new version of the SP machine will be an
open-source software virtual machine, derived from the existing "SP computer
model", and hosted on an existing high-performance computer. It will be a means
for researchers everywhere to explore what can be done with the system and to
create new versions of it. The SP system is a unique attempt to simplify and
integrate observations and concepts across artificial intelligence, mainstream
computing, mathematics, and human perception and cognition, with information
compression as a unifying theme. Potential benefits and applications include
helping to solve problems associated with big data; facilitating the
development of autonomous robots; unsupervised learning, natural language
processing, several kinds of reasoning, fuzzy pattern recognition at multiple
levels of abstraction, computer vision, best-match and semantic forms of
information retrieval, software engineering, medical diagnosis, simplification
of computing systems, and the seamless integration of diverse kinds of
knowledge and diverse aspects of intelligence. Additional motivations include
the potential of the SP system to help solve problems in defence, security, and
the detection and prevention of crime; potential in terms of economic, social,
environmental, and academic criteria, and in terms of publicity; and the
potential for international influence in research. The main elements of the
proposed facility are described, including support for the development of
"SP-neural", a neural version of the SP machine. The facility should be
permanent in the sense that it should be available for the foreseeable future,
and it should be designed to facilitate its use by researchers anywhere in the
world.
| [
{
"version": "v1",
"created": "Wed, 19 Aug 2015 09:03:18 GMT"
}
] | 1,440,028,800,000 | [
[
"Wolff",
"J. Gerard",
""
],
[
"Palade",
"Vasile",
""
]
] |
1508.04633 | Johannes Textor | Johannes Textor | Drawing and Analyzing Causal DAGs with DAGitty | 15 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DAGitty is a software for drawing and analyzing causal diagrams, also known
as directed acyclic graphs (DAGs). Functions include identification of minimal
sufficient adjustment sets for estimating causal effects, diagnosis of
insufficient or invalid adjustment via the identification of biasing paths,
identification of instrumental variables, and derivation of testable
implications. DAGitty is provided in the hope that it is useful for researchers
and students in Epidemiology, Sociology, Psychology, and other empirical
disciplines. The software should run in any web browser that supports modern
JavaScript, HTML, and SVG. This is the user manual for DAGitty version 2.3. The
manual is updated with every release of a new stable version. DAGitty is
available at dagitty.net.
| [
{
"version": "v1",
"created": "Wed, 19 Aug 2015 13:11:32 GMT"
}
] | 1,440,028,800,000 | [
[
"Textor",
"Johannes",
""
]
] |
1508.04872 | Yustinus Soelistio Eko | Ardy Wibowo Haryanto, Adhi Kusnadi, Yustinus Eko Soelistio | Warehouse Layout Method Based on Ant Colony and Backtracking Algorithm | 5 pages, published in the proceedings of the 14th IAPR International
Conference on Quality in Research (QIR) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The warehouse is one of the important aspects of a company. Therefore, it is
necessary to improve the Warehouse Management System (WMS) with a simple
function that can determine the layout of stored goods. In this paper we
propose an improved warehouse layout method based on the ant colony and
backtracking algorithms. The method works in two steps. First, it generates a
solution-parameter tree with the backtracking algorithm. Second, it reduces
the solution parameters using a combination of the ant colony and
backtracking algorithms. The method was tested by measuring the time needed
to build the tree and to fill up the space under two scenarios. It needs
0.294 to 33.15 seconds to construct the tree and 3.23 seconds (best case) to
61.41 minutes (worst case) to fill up the warehouse. The method proves to be
an attractive alternative solution for warehouse layout systems.
| [
{
"version": "v1",
"created": "Thu, 20 Aug 2015 04:12:54 GMT"
}
] | 1,440,115,200,000 | [
[
"Haryanto",
"Ardy Wibowo",
""
],
[
"Kusnadi",
"Adhi",
""
],
[
"Soelistio",
"Yustinus Eko",
""
]
] |
1508.04885 | Michelle Blom | Michelle Blom, Peter J. Stuckey, Vanessa J. Teague and Ron Tidhar | Efficient Computation of Exact IRV Margins | 20 pages, 6 tables, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The margin of victory is easy to compute for many election schemes but
difficult for Instant Runoff Voting (IRV). This is important because arguments
about the correctness of an election outcome usually rely on the size of the
electoral margin. For example, risk-limiting audits require a knowledge of the
margin of victory in order to determine how much auditing is necessary. This
paper presents a practical branch-and-bound algorithm for exact IRV margin
computation that substantially improves on the current best-known approach.
Although exponential in the worst case, our algorithm runs efficiently in
practice on all the real examples we could find. We can efficiently discover
exact margins on election instances that cannot be solved by the current
state-of-the-art.
| [
{
"version": "v1",
"created": "Thu, 20 Aug 2015 05:56:53 GMT"
}
] | 1,440,115,200,000 | [
[
"Blom",
"Michelle",
""
],
[
"Stuckey",
"Peter J.",
""
],
[
"Teague",
"Vanessa J.",
""
],
[
"Tidhar",
"Ron",
""
]
] |
1508.04928 | Hiromi Narimatsu | Hiromi Narimatsu and Hiroyuki Kasai | Duration and Interval Hidden Markov Model for Sequential Data Analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of sequential event data has been recognized as one of the essential
tools in the data modeling and analysis field. In this paper, after examining
its technical requirements and the issues in modeling complex but practical
situations, we propose a new sequential data model, dubbed the Duration and
Interval Hidden Markov Model (DI-HMM), that efficiently represents the "state
duration" and "state interval" of data events. This plays an important role
in representing practical time-series sequential data, and eventually
provides efficient and flexible sequential data retrieval. Numerical
experiments on synthetic and real data demonstrate the efficiency and
accuracy of the proposed DI-HMM.
| [
{
"version": "v1",
"created": "Thu, 20 Aug 2015 09:09:45 GMT"
}
] | 1,440,115,200,000 | [
[
"Narimatsu",
"Hiromi",
""
],
[
"Kasai",
"Hiroyuki",
""
]
] |
1508.05804 | Bernardo Gon\c{c}alves | Bernardo Gon\c{c}alves, Fabio Porto | A note on the complexity of the causal ordering problem | 25 pages, 4 figures | Artificial Intelligence 238:154-65, 2016 | 10.1016/j.artint.2016.06.004 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this note we provide a concise report on the complexity of the causal
ordering problem, originally introduced by Simon to reason about causal
dependencies implicit in systems of mathematical equations. We show that
Simon's classical algorithm to infer causal ordering is NP-Hard---an
intractability previously conjectured but never proven. We then present a detailed
account based on Nayak's suggested algorithmic solution (the best available),
which is dominated by computing transitive closure---bounded in time by
$O(|\mathcal V|\cdot |\mathcal S|)$, where $\mathcal S(\mathcal E, \mathcal V)$
is the input system structure composed of a set $\mathcal E$ of equations over
a set $\mathcal V$ of variables with number of variable appearances (density)
$|\mathcal S|$. We also comment on the potential of causal ordering for
emerging applications in large-scale hypothesis management and analytics.
| [
{
"version": "v1",
"created": "Mon, 24 Aug 2015 13:56:32 GMT"
},
{
"version": "v2",
"created": "Mon, 13 Jun 2016 02:54:54 GMT"
}
] | 1,469,491,200,000 | [
[
"Gonçalves",
"Bernardo",
""
],
[
"Porto",
"Fabio",
""
]
] |
1508.06924 | Erik Mueller | Erik T. Mueller and Henry Minsky | Using Thought-Provoking Children's Questions to Drive Artificial
Intelligence Research | update for EGPAI 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose to use thought-provoking children's questions (TPCQs), namely
Highlights BrainPlay questions, as a new method to drive artificial
intelligence research and to evaluate the capabilities of general-purpose AI
systems. These questions are designed to stimulate thought and learning in
children, and they can be used to do the same thing in AI systems, while
demonstrating the system's reasoning capabilities to the evaluator. We
introduce the TPCQ task, which takes a TPCQ as input and
produces as output (1) answers to the question and (2) learned generalizations.
We discuss how BrainPlay questions stimulate learning. We analyze 244 BrainPlay
questions, and we report statistics on question type, question class, answer
cardinality, answer class, types of knowledge needed, and types of reasoning
needed. We find that BrainPlay questions span many aspects of intelligence.
Because the answers to BrainPlay questions and the generalizations learned from
them are often highly open-ended, we suggest using human judges for evaluation.
| [
{
"version": "v1",
"created": "Thu, 27 Aug 2015 16:23:49 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Sep 2015 13:01:00 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Jul 2017 00:34:24 GMT"
}
] | 1,501,113,600,000 | [
[
"Mueller",
"Erik T.",
""
],
[
"Minsky",
"Henry",
""
]
] |
1508.06973 | Catarina Moreira | Catarina Moreira and Andreas Wichert | The Relation Between Acausality and Interference in Quantum-Like
Bayesian Networks | In proceedings of the 9th International Conference on Quantum
Interactions, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyse a quantum-like Bayesian Network that puts together cause/effect
relationships and semantic similarities between events. These semantic
similarities constitute acausal connections according to the Synchronicity
principle and provide new relationships to quantum-like probabilistic graphical
models. As a consequence, beliefs (or any other events) can be represented in
vector spaces, in which quantum parameters are determined by the similarities
that these vectors share with each other. Events connected by a semantic meaning do
not need to have an explanation in terms of cause and effect.
| [
{
"version": "v1",
"created": "Wed, 26 Aug 2015 17:37:01 GMT"
}
] | 1,440,720,000,000 | [
[
"Moreira",
"Catarina",
""
],
[
"Wichert",
"Andreas",
""
]
] |
1508.07092 | Saisai Ma | Saisai Ma, Jiuyong Li, Lin Liu, Thuc Duy Le | Mining Combined Causes in Large Data Sets | This paper has been accepted by Knowledge-Based Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, many methods have been developed for detecting causal
relationships in observational data. Some of them have the potential to tackle
large data sets. However, these methods fail to discover a combined cause, i.e.
a multi-factor cause consisting of two or more component variables which
individually are not causes. A straightforward approach to uncovering a
combined cause is to include both individual and combined variables in the
causal discovery using existing methods, but this scheme is computationally
infeasible due to the huge number of combined variables. In this paper, we
propose a novel approach to address this practical causal discovery problem,
i.e. mining combined causes in large data sets. The experiments with both
synthetic and real world data sets show that the proposed method can obtain
high-quality causal discoveries with a high computational efficiency.
| [
{
"version": "v1",
"created": "Fri, 28 Aug 2015 04:42:23 GMT"
},
{
"version": "v2",
"created": "Thu, 15 Oct 2015 05:17:19 GMT"
}
] | 1,444,953,600,000 | [
[
"Ma",
"Saisai",
""
],
[
"Li",
"Jiuyong",
""
],
[
"Liu",
"Lin",
""
],
[
"Le",
"Thuc Duy",
""
]
] |
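The difficulty motivating the abstract above — a combined cause whose components are individually invisible to single-variable tests — is easy to reproduce. The toy example below is our own illustration, not the paper's algorithm: in an XOR-style dataset, neither X1 nor X2 correlates with Y, while a combined variable does.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.integers(0, 2, n)
x2 = rng.integers(0, 2, n)
y = x1 ^ x2                      # Y is caused only by the combination

combined = x1 & x2               # one possible "combined variable"

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print(corr(x1, y))        # ~0: single-variable tests miss the cause
print(corr(x2, y))        # ~0
print(corr(combined, y))  # clearly non-zero: the combination matters
```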
1509.00584 | Norbert B\'atfai Ph.D. | Norbert B\'atfai | Turing's Imitation Game has been Improved | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using the recently introduced universal computing model, called orchestrated
machine, that represents computations in a dissipative environment, we consider
a new kind of interpretation of Turing's Imitation Game. In addition, we raise
the question of whether intelligence may show fractal properties. Then we
sketch a vision of what robotic cars are going to do in the future. Finally we
give the specification of an artificial life game based on the concept of
orchestrated machines. The purpose of this paper is to start the search for
possible relationships between these different topics.
| [
{
"version": "v1",
"created": "Wed, 2 Sep 2015 07:18:20 GMT"
}
] | 1,441,238,400,000 | [
[
"Bátfai",
"Norbert",
""
]
] |
1509.01379 | Balubaid Mohammed | Mohammed A. Balubaid and Umar Manzoor | Ontology Based SMS Controller for Smart Phones | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text analysis includes lexical analysis of the text and has been widely
studied and used in diverse applications. In the last decade, researchers have
proposed many efficient solutions to analyze / classify large text datasets;
however, analysis / classification of short text is still a challenge because
1) the data is very sparse, 2) it contains noise words, and 3) it is difficult
to understand the syntactic structure of the text. Short Messaging Service (SMS)
is a text messaging service for mobile/smart phone and this service is
frequently used by all mobile users. Because of the popularity of SMS service,
marketing companies nowadays are also using this service for direct marketing
also known as SMS marketing. In this paper, we have proposed an Ontology based
SMS Controller which analyzes the text message and classifies it, using an
ontology, as legitimate or spam. The proposed system has been tested on
different scenarios and experimental results show that the proposed solution is
effective both in terms of efficiency and time.
| [
{
"version": "v1",
"created": "Fri, 4 Sep 2015 09:29:47 GMT"
}
] | 1,441,584,000,000 | [
[
"Balubaid",
"Mohammed A.",
""
],
[
"Manzoor",
"Umar",
""
]
] |
1509.02012 | Fabio Patrizi | Giuseppe De Giacomo (1), Yves Lesp\'erance (2), Fabio Patrizi (3) ((1)
Sapienza University of Rome, Italy, (2) York University, Toronto, ON, Canada,
(3) Free University of Bozen-Bolzano, Italy) | Bounded Situation Calculus Action Theories | 51 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we investigate bounded action theories in the situation
calculus. A bounded action theory is one which entails that, in every
situation, the number of object tuples in the extension of fluents is bounded
by a given constant, although such extensions are in general different across
the infinitely many situations. We argue that such theories are common in
applications, either because facts do not persist indefinitely or because the
agent eventually forgets some facts, as new ones are learnt. We discuss various
classes of bounded action theories. Then we show that verification of a
powerful first-order variant of the mu-calculus is decidable for such theories.
Notably, this variant supports a controlled form of quantification across
situations. We also show that through verification, we can actually check
whether an arbitrary action theory maintains boundedness.
| [
{
"version": "v1",
"created": "Mon, 7 Sep 2015 12:42:45 GMT"
}
] | 1,441,670,400,000 | [
[
"De Giacomo",
"Giuseppe",
""
],
[
"Lespérance",
"Yves",
""
],
[
"Patrizi",
"Fabio",
""
]
] |
1509.02384 | Anand Subramanian D.Sc. | Arthur Kramer, Anand Subramanian | A unified heuristic and an annotated bibliography for a large class of
earliness-tardiness scheduling problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes a unified heuristic algorithm for a large class of
earliness-tardiness (E-T) scheduling problems. We consider single/parallel
machine E-T problems that may or may not consider some additional features such
as idle time, setup times and release dates. In addition, we also consider
those problems whose objective is to minimize either the total (average)
weighted completion time or the total (average) weighted flow time, which arise
as particular cases when the due dates of all jobs are either set to zero or to
their associated release dates, respectively. The developed local search based
metaheuristic framework is quite simple, but at the same time relies on
sophisticated procedures for efficiently performing local search according to
the characteristics of the problem. We present efficient move evaluation
approaches for some parallel machine problems that generalize the existing ones
for single machine problems. The algorithm was tested on hundreds of instances
of several E-T problems and particular cases. The results obtained show that
our unified heuristic is capable of producing high quality solutions when
compared to the best ones available in the literature that were obtained by
specific methods. Moreover, we provide an extensive annotated bibliography on
the problems related to those considered in this work, where we not only
indicate the approach(es) used in each publication, but we also point out the
characteristics of the problem(s) considered. Beyond that, we classify the
existing methods in different categories so as to have a better idea of the
popularity of each type of solution procedure.
| [
{
"version": "v1",
"created": "Tue, 8 Sep 2015 14:26:31 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Aug 2016 16:43:51 GMT"
},
{
"version": "v3",
"created": "Tue, 10 Jan 2017 17:12:00 GMT"
}
] | 1,484,092,800,000 | [
[
"Kramer",
"Arthur",
""
],
[
"Subramanian",
"Anand",
""
]
] |
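For concreteness, the sketch below evaluates the total weighted earliness-tardiness objective of a single-machine sequence without idle time and applies a naive swap-based local search; it is our own minimal example with made-up data, far simpler than the unified heuristic described in the abstract above.

```python
from itertools import combinations

def weighted_et(seq, p, d, w_e, w_t):
    """Total weighted earliness-tardiness, jobs processed back to back."""
    t, cost = 0, 0
    for j in seq:
        t += p[j]
        cost += w_e[j] * max(d[j] - t, 0) + w_t[j] * max(t - d[j], 0)
    return cost

def swap_local_search(seq, p, d, w_e, w_t):
    """Keep applying improving pairwise swaps until none is found."""
    seq, best = list(seq), weighted_et(seq, p, d, w_e, w_t)
    improved = True
    while improved:
        improved = False
        for i, j in combinations(range(len(seq)), 2):
            seq[i], seq[j] = seq[j], seq[i]
            cost = weighted_et(seq, p, d, w_e, w_t)
            if cost < best:
                best, improved = cost, True
            else:
                seq[i], seq[j] = seq[j], seq[i]  # undo the swap
    return seq, best

p   = [3, 2, 4, 1]      # processing times (toy data)
d   = [4, 6, 5, 9]      # due dates
w_e = [1, 1, 2, 1]      # earliness weights
w_t = [3, 2, 4, 2]      # tardiness weights
print(swap_local_search([0, 1, 2, 3], p, d, w_e, w_t))
```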
1509.02413 | Yanping Huang | Yanping Huang | Learning Efficient Representations for Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Markov decision processes (MDPs) are a well studied framework for solving
sequential decision making problems under uncertainty. Exact methods for
solving MDPs based on dynamic programming such as policy iteration and value
iteration are effective on small problems. In problems with a large discrete
state space or with continuous state spaces, a compact representation is
essential for providing efficient approximate solutions to MDPs. Commonly
used approximation algorithms involve constructing basis functions for
projecting the value function onto a low dimensional subspace, and building a
factored or hierarchical graphical model to decompose the transition and reward
functions. However, hand-coding a good compact representation for a given
reinforcement learning (RL) task can be quite difficult and time consuming.
Recent approaches have attempted to automatically discover efficient
representations for RL.
In this thesis proposal, we discuss the problems of automatically
constructing structured kernel for kernel based RL, a popular approach to
learning non-parametric approximations for value function. We explore a space
of kernel structures which are built compositionally from base kernels using a
context-free grammar. We examine a greedy algorithm for searching over the
structure space. To demonstrate how the learned structure can represent and
approximate the original RL problem in terms of compactness and efficiency, we
plan to evaluate our method on a synthetic problem and compare it to other RL
baselines.
| [
{
"version": "v1",
"created": "Fri, 28 Aug 2015 06:01:56 GMT"
}
] | 1,441,756,800,000 | [
[
"Huang",
"Yanping",
""
]
] |
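A minimal sketch of the compositional-kernel idea from the abstract above: base kernels are combined with + and * by a tiny grammar, and a greedy step keeps whichever one-step expansion scores best. The scoring function (kernel ridge regression error on toy data) is our own stand-in for illustration; the proposal itself targets value-function approximation.

```python
import numpy as np

def rbf(a, b): return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)
def lin(a, b): return a[:, None] * b[None, :]
BASE = {"RBF": rbf, "LIN": lin}

def score(kern, x, y, lam=1e-2):
    """Negative squared error of kernel ridge regression (higher is better)."""
    K = kern(x, x)
    alpha = np.linalg.solve(K + lam * np.eye(len(x)), y)
    return -float(np.sum((K @ alpha - y) ** 2))

def expand(kern, name):
    """One grammar step: k -> k + b | k * b for each base kernel b."""
    for bname, b in BASE.items():
        yield (f"({name} + {bname})",
               lambda a, c, k=kern, b=b: k(a, c) + b(a, c))
        yield (f"({name} * {bname})",
               lambda a, c, k=kern, b=b: k(a, c) * b(a, c))

x = np.linspace(-3, 3, 40)
y = np.sin(2 * x) * x                 # toy regression target
name, kern = "RBF", rbf
for _ in range(3):                    # greedy structure search
    cands = list(expand(kern, name))
    name, kern = max(cands, key=lambda nk: score(nk[1], x, y))
    print(name, score(kern, x, y))
```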
1509.02709 | Tom Everitt | Tom Everitt, Marcus Hutter | A Topological Approach to Meta-heuristics: Analytical Results on the BFS
vs. DFS Algorithm Selection Problem | Main results published in 28th Australian Joint Conference on
Artificial Intelligence, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Search is a central problem in artificial intelligence, and breadth-first
search (BFS) and depth-first search (DFS) are the two most fundamental ways to
search. In this paper we derive estimates for average BFS and DFS runtime. The
average runtime estimates can be used to allocate resources or judge the
hardness of a problem. They can also be used for selecting the best graph
representation, and for selecting the faster algorithm out of BFS and DFS. They
may also form the basis for an analysis of more advanced search methods. The
paper treats both tree search and graph search. For tree search, we employ a
probabilistic model of goal distribution; for graph search, the analysis
depends on an additional statistic of path redundancy and average branching
factor. As an application, we use the results to predict BFS and DFS runtime on
two concrete grammar problems and on the N-puzzle. Experimental verification
shows that our analytical approximations come close to empirical reality.
| [
{
"version": "v1",
"created": "Wed, 9 Sep 2015 10:30:48 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Apr 2018 06:40:49 GMT"
}
] | 1,523,577,600,000 | [
[
"Everitt",
"Tom",
""
],
[
"Hutter",
"Marcus",
""
]
] |
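The BFS/DFS trade-off analysed above is easy to probe empirically. The sketch below — our own toy setup, not the paper's model — counts node expansions for BFS and DFS on a complete binary tree with a single goal placed uniformly at random, one simple instance of a goal-distribution model.

```python
import random
from collections import deque

DEPTH = 12
N = 2 ** (DEPTH + 1) - 1          # nodes 1..N; children of v are 2v, 2v+1

def children(v):
    return [c for c in (2 * v, 2 * v + 1) if c <= N]

def bfs_expansions(goal):
    frontier, count = deque([1]), 0
    while frontier:
        v = frontier.popleft(); count += 1
        if v == goal: return count
        frontier.extend(children(v))

def dfs_expansions(goal):
    stack, count = [1], 0
    while stack:
        v = stack.pop(); count += 1
        if v == goal: return count
        stack.extend(reversed(children(v)))

random.seed(0)
goals = [random.randint(1, N) for _ in range(200)]
print("avg BFS expansions:", sum(map(bfs_expansions, goals)) / len(goals))
print("avg DFS expansions:", sum(map(dfs_expansions, goals)) / len(goals))
```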
1509.03247 | Arindam Chaudhuri AC | Arindam Chaudhuri | An Epsilon Hierarchical Fuzzy Twin Support Vector Regression | Research work at Samsung Research and Development Institute Delhi | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The research presents epsilon hierarchical fuzzy twin support vector
regression based on epsilon fuzzy twin support vector regression and epsilon
twin support vector regression. Epsilon FTSVR is achieved by incorporating
trapezoidal fuzzy numbers to epsilon TSVR which takes care of uncertainty
existing in forecasting problems. Epsilon FTSVR determines a pair of epsilon
insensitive proximal functions by solving two related quadratic programming
problems. The structural risk minimization principle is implemented by
introducing regularization term in primal problems of epsilon FTSVR. This
yields stable positive definite dual problems, which improves regression
performance. Epsilon FTSVR is then reformulated as epsilon HFTSVR consisting of
a set of hierarchical layers each containing epsilon FTSVR. Experimental
results on both synthetic and real datasets reveal that epsilon HFTSVR has
remarkable generalization performance with minimum training time.
| [
{
"version": "v1",
"created": "Thu, 10 Sep 2015 17:37:20 GMT"
}
] | 1,441,929,600,000 | [
[
"Chaudhuri",
"Arindam",
""
]
] |
1509.03390 | Robert Sloan | Stellan Ohlsson, Robert H. Sloan, Gy\"orgy Tur\'an, Aaron Urasky | Measuring an Artificial Intelligence System's Performance on a Verbal IQ
Test For Young Children | 17 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We administered the Verbal IQ (VIQ) part of the Wechsler Preschool and
Primary Scale of Intelligence (WPPSI-III) to the ConceptNet 4 AI system. The
test questions (e.g., "Why do we shake hands?") were translated into ConceptNet
4 inputs using a combination of the simple natural language processing tools
that come with ConceptNet together with short Python programs that we wrote.
The question answering used a version of ConceptNet based on spectral methods.
The ConceptNet system scored a WPPSI-III VIQ that is average for a
four-year-old child, but below average for 5 to 7 year-olds. Large variations
among subtests indicate potential areas of improvement. In particular, results
were strongest for the Vocabulary and Similarities subtests, intermediate for
the Information subtest, and lowest for the Comprehension and Word Reasoning
subtests. Comprehension is the subtest most strongly associated with common
sense. The large variations among subtests and ordinary common sense strongly
suggest that the WPPSI-III VIQ results do not show that "ConceptNet has the
verbal abilities of a four-year-old." Rather, children's IQ tests offer one
objective metric for the evaluation and comparison of AI systems. Also, this
work continues previous research on Psychometric AI.
| [
{
"version": "v1",
"created": "Fri, 11 Sep 2015 05:14:51 GMT"
}
] | 1,442,188,800,000 | [
[
"Ohlsson",
"Stellan",
""
],
[
"Sloan",
"Robert H.",
""
],
[
"Turán",
"György",
""
],
[
"Urasky",
"Aaron",
""
]
] |
1509.03527 | Thibault Gauthier | Thibault Gauthier, Cezary Kaliszyk | Sharing HOL4 and HOL Light proof knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New proof assistant developments often involve concepts similar to already
formalized ones. When proving their properties, a human can often take
inspiration from the existing formalized proofs available in other provers or
libraries. In this paper we propose and evaluate a number of methods, which
strengthen proof automation by learning from proof libraries of different
provers. Certain conjectures can be proved directly from the dependencies
induced by similar proofs in the other library. Even if exact correspondences
are not found, learning-reasoning systems can make use of the association
between proved theorems and their characteristics to predict the relevant
premises. Such external help can be further combined with internal advice. We
evaluate the proposed knowledge-sharing methods by reproving the HOL Light and
HOL4 standard libraries. The learning-reasoning system HOL(y)Hammer, whose
single best strategy could automatically find proofs for 30% of the HOL Light
problems, can prove 40% with the knowledge from HOL4.
| [
{
"version": "v1",
"created": "Fri, 11 Sep 2015 14:18:04 GMT"
}
] | 1,442,188,800,000 | [
[
"Gauthier",
"Thibault",
""
],
[
"Kaliszyk",
"Cezary",
""
]
] |
1509.03534 | Thibault Gauthier | Thibault Gauthier, Cezary Kaliszyk | Premise Selection and External Provers for HOL4 | null | null | 10.1145/2676724.2693173 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning-assisted automated reasoning has recently gained popularity among
the users of Isabelle/HOL, HOL Light, and Mizar. In this paper, we present an
add-on to the HOL4 proof assistant and an adaptation of the HOLyHammer system
that provides machine learning-based premise selection and automated reasoning
also for HOL4. We efficiently record the HOL4 dependencies and extract features
from the theorem statements, which form a basis for premise selection.
HOLyHammer transforms the HOL4 statements in the various TPTP-ATP proof
formats, which are then processed by the ATPs. We discuss the different
evaluation settings: ATPs, accessible lemmas, and premise numbers. We measure
the performance of HOLyHammer on the HOL4 standard library. The results are
combined accordingly and compared with the HOL Light experiments, showing a
comparably high quality of predictions. The system directly benefits HOL4 users
by automatically finding proofs dependencies that can be reconstructed by
Metis.
| [
{
"version": "v1",
"created": "Fri, 11 Sep 2015 14:31:05 GMT"
}
] | 1,442,188,800,000 | [
[
"Gauthier",
"Thibault",
""
],
[
"Kaliszyk",
"Cezary",
""
]
] |
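At its core, feature-based premise selection can be as simple as a nearest-neighbour lookup: rank previously proved theorems by feature similarity to the goal and pool the premises their proofs used. The sketch below is a heavily simplified, hypothetical version of that idea with invented theorem names; it is not the HOLyHammer implementation.

```python
def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def select_premises(goal_features, library, k=2, max_premises=8):
    """library: name -> (statement features, premises used in its proof)."""
    ranked = sorted(library.items(),
                    key=lambda kv: jaccard(goal_features, kv[1][0]),
                    reverse=True)
    premises = []
    for name, (feats, deps) in ranked[:k]:
        for dep in deps:
            if dep not in premises:
                premises.append(dep)
    return premises[:max_premises]

# Hypothetical toy library of proved theorems and their dependencies.
library = {
    "ADD_COMM":  ({"add", "num"},     ["ADD_SUC", "ADD_0"]),
    "MUL_COMM":  ({"mul", "num"},     ["MUL_SUC", "ADD_COMM"]),
    "APP_ASSOC": ({"list", "append"}, ["LIST_INDUCT"]),
}
print(select_premises({"add", "suc", "num"}, library))
```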
1509.03564 | Brian Ruttenberg | Avi Pfeffer, Brian Ruttenberg, Amy Sliva, Michael Howard, Glenn Takata | Lazy Factored Inference for Functional Probabilistic Programming | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic programming provides the means to represent and reason about
complex probabilistic models using programming language constructs. Even simple
probabilistic programs can produce models with infinitely many variables.
Factored inference algorithms are widely used for probabilistic graphical
models, but cannot be applied to these programs because all the variables and
factors have to be enumerated. In this paper, we present a new inference
framework, lazy factored inference (LFI), that enables factored algorithms to
be used for models with infinitely many variables. LFI expands the model to a
bounded depth and uses the structure of the program to precisely quantify the
effect of the unexpanded part of the model, producing lower and upper bounds to
the probability of the query.
| [
{
"version": "v1",
"created": "Fri, 11 Sep 2015 15:45:39 GMT"
}
] | 1,442,188,800,000 | [
[
"Pfeffer",
"Avi",
""
],
[
"Ruttenberg",
"Brian",
""
],
[
"Sliva",
"Amy",
""
],
[
"Howard",
"Michael",
""
],
[
"Takata",
"Glenn",
""
]
] |
1509.03585 | Fuan Pu | Fuan Pu, Jian Luo and Guiming Luo | Some Supplementaries to The Counting Semantics for Abstract
Argumentation | 8 pages, 3 figures, ICTAI 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dung's abstract argumentation framework consists of a set of interacting
arguments and a series of semantics for evaluating them. Those semantics
partition the powerset of the set of arguments into two classes: extensions and
non-extensions. In order to reason with a specific semantics, one needs to take
a credulous or skeptical approach, i.e. an argument is eventually accepted, if
it is accepted in one or all extensions, respectively. In our previous work
\cite{ref-pu2015counting}, we have proposed a novel semantics, called
\emph{counting semantics}, which allows for a more fine-grained assessment to
arguments by counting the number of their respective attackers and defenders
based on argument graph and argument game. In this paper, we continue our
previous work by presenting some supplementaries about how to choose the
damping factor for the counting semantics, and what relationships it has with
some existing approaches, such as Dung's classical semantics and generic
gradual valuations. Lastly, an axiomatic perspective on the ranking semantics
induced by our counting semantics is presented.
| [
{
"version": "v1",
"created": "Fri, 11 Sep 2015 17:23:54 GMT"
}
] | 1,442,188,800,000 | [
[
"Pu",
"Fuan",
""
],
[
"Luo",
"Jian",
""
],
[
"Luo",
"Guiming",
""
]
] |
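To give a flavour of how counting attackers and defenders with a damping factor can work, here is a toy fixed-point iteration in which attackers at odd path lengths count negatively and defenders at even lengths positively, with contributions damped geometrically. The normalisation and the convention A[i, j] = 1 iff j attacks i are our assumptions for illustration, not the paper's exact definitions.

```python
import numpy as np

def counting_strength(A, d=0.5, iters=100):
    """Toy strength values from damped alternating attacker/defender counts.

    Equivalent to summing the series e - d*N e + d^2*N^2 e - ...,
    where N is the attack matrix normalised to keep the series convergent.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    norm = max(A.sum(axis=1).max(), 1.0)   # normalise by max in-degree
    N = A / norm
    v = np.ones(n)
    for _ in range(iters):
        v = np.ones(n) - d * (N @ v)
    return v

# a <- b <- c : c defends a by attacking a's attacker b
A = [[0, 1, 0],   # a is attacked by b
     [0, 0, 1],   # b is attacked by c
     [0, 0, 0]]   # c is unattacked
print(counting_strength(A))   # c strongest, b weakest, a in between
```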
1509.04064 | Michael Castronovo | Michael Castronovo, Damien Ernst, Adrien Couetoux, Raphael Fonteneau | Benchmarking for Bayesian Reinforcement Learning | 37 pages | null | 10.1371/journal.pone.0157088 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the Bayesian Reinforcement Learning (BRL) setting, agents try to maximise
the collected rewards while interacting with their environment, using some
prior knowledge that is accessed beforehand. Many BRL algorithms have already
been proposed, but even though a few toy examples exist in the literature,
there are still no extensive or rigorous benchmarks to compare them. The paper
addresses this problem, and provides a new BRL comparison methodology along
with the corresponding open source library. In this methodology, a comparison
criterion that measures the performance of algorithms on large sets of Markov
Decision Processes (MDPs) drawn from some probability distributions is defined.
In order to enable the comparison of non-anytime algorithms, our methodology
also includes a detailed analysis of the computation time requirement of each
algorithm. Our library is released with all source code and documentation: it
includes three test problems, each of which has two different prior
distributions, and seven state-of-the-art RL algorithms. Finally, our library
is illustrated by comparing all the available algorithms and the results are
discussed.
| [
{
"version": "v1",
"created": "Mon, 14 Sep 2015 12:47:52 GMT"
}
] | 1,475,020,800,000 | [
[
"Castronovo",
"Michael",
""
],
[
"Ernst",
"Damien",
""
],
[
"Couetoux",
"Adrien",
""
],
[
"Fonteneau",
"Raphael",
""
]
] |
1509.04904 | Tshilidzi Marwala | Pramod Kumar Parida, Tshilidzi Marwala and Snehashish Chakraverty | Causal Model Analysis using Collider v-structure with Negative
Percentage Mapping | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major problem of causal inference is the arrangement of dependent nodes in
a directed acyclic graph (DAG) with path coefficients and observed confounders.
Path coefficients do not provide the units to measure the strength of
information flowing from one node to the other. Here we proposed the method of
causal structure learning using collider v-structures (CVS) with Negative
Percentage Mapping (NPM) to get selective thresholds of information strength,
to direct the edges and subjective confounders in a DAG. The NPM is used to
scale the strength of information passed through nodes as a percentage in the
interval from 0 to 1. The causal structures are constructed by a bottom-up
approach using path coefficients, causal directions and confounders, derived by
implementing the collider v-structure and NPM. The method is self-sufficient to
observe all the latent confounders present in the causal model and capable of
detecting every responsible causal direction. The results are tested for
simulated datasets of non-Gaussian distributions and compared with DirectLiNGAM
and ICA-LiNGAM to check efficiency of the proposed method.
| [
{
"version": "v1",
"created": "Wed, 16 Sep 2015 12:37:30 GMT"
}
] | 1,442,448,000,000 | [
[
"Parida",
"Pramod Kumar",
""
],
[
"Marwala",
"Tshilidzi",
""
],
[
"Chakraverty",
"Snehashish",
""
]
] |
1509.05434 | Thabet Slimani | Thabet Slimani | A Study Investigating Typical Concepts and Guidelines for Ontology
Building | 8 pages, 2 figures | Journal of Emerging Trends in Computing and Information
Sciences.Vol. 5, No. 12 December 2014, ISSN 2079-8407, pp.886-893 | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | In semantic technologies, the shared common understanding of the structure of
information among artifacts (people or software agents) can be realized by
building an ontology. To do this, it is imperative for an ontology builder to
answer several questions: a) What are the main components of an ontology? b)
What does an ontology look like and how does it work? c) Is it required to
consider reusing existing ontologies or not? d) What is the complexity of the
ontology to be developed? e) What are the principles of ontology design and
development? f) How to evaluate an ontology? This paper answers all of these
key questions. The aim of this paper is to present a set of guiding
principles to help ontology developers and also inexperienced users to answer
such questions.
| [
{
"version": "v1",
"created": "Thu, 17 Sep 2015 20:27:31 GMT"
}
] | 1,442,793,600,000 | [
[
"Slimani",
"Thabet",
""
]
] |
1509.06731 | Liangliang Cao | Nikolai Yakovenko, Liangliang Cao, Colin Raffel and James Fan | Poker-CNN: A Pattern Learning Strategy for Making Draws and Bets in
Poker Games | 8 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Poker is a family of card games that includes many variations. We hypothesize
that most poker games can be solved as a pattern matching problem, and propose
creating a strong poker playing system based on a unified poker representation.
Our poker player learns through iterative self-play, and improves its
understanding of the game by training on the results of its previous actions
without sophisticated domain knowledge. We evaluate our system on three poker
games: single player video poker, two-player Limit Texas Hold'em, and finally
two-player 2-7 triple draw poker. We show that our model can quickly learn
patterns in these very different poker games while it improves from zero
knowledge to a competitive player against human experts.
The contributions of this paper include: (1) a novel representation for poker
games, extendable to different poker variations, (2) a CNN based learning model
that can effectively learn the patterns in three different games, and (3) a
self-trained system that significantly beats the heuristic-based program on
which it is trained, and our system is competitive against human expert
players.
| [
{
"version": "v1",
"created": "Tue, 22 Sep 2015 19:05:39 GMT"
}
] | 1,442,966,400,000 | [
[
"Yakovenko",
"Nikolai",
""
],
[
"Cao",
"Liangliang",
""
],
[
"Raffel",
"Colin",
""
],
[
"Fan",
"James",
""
]
] |
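One plausible unified card representation — the kind of grid a pattern-learning CNN could consume — is a binary suit-by-rank matrix per card group. The encoding below is our own guess at such a scheme, not necessarily the paper's exact tensor layout.

```python
import numpy as np

RANKS = "23456789TJQKA"
SUITS = "cdhs"

def encode_cards(cards):
    """Encode a set of cards (e.g. ['As', 'Kd']) as a 4 x 13 binary matrix.

    The same grid works across variants: hole cards, board cards and
    discards can each get their own channel, stacked into one tensor.
    """
    grid = np.zeros((len(SUITS), len(RANKS)), dtype=np.float32)
    for card in cards:
        rank, suit = card[0], card[1]
        grid[SUITS.index(suit), RANKS.index(rank)] = 1.0
    return grid

hand = ["As", "Ad", "7c", "7h", "2d"]          # a two-pair video poker hand
x = np.stack([encode_cards(hand)])             # shape (1, 4, 13): CNN input
print(x.shape, int(x.sum()))                   # (1, 4, 13) 5
```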
1509.07582 | George Konidaris | George Konidaris | Constructing Abstraction Hierarchies Using a Skill-Symbol Loop | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe a framework for building abstraction hierarchies whereby an agent
alternates skill- and representation-acquisition phases to construct a sequence
of increasingly abstract Markov decision processes. Our formulation builds on
recent results showing that the appropriate abstract representation of a
problem is specified by the agent's skills. We describe how such a hierarchy
can be used for fast planning, and illustrate the construction of an
appropriate hierarchy for the Taxi domain.
| [
{
"version": "v1",
"created": "Fri, 25 Sep 2015 04:07:22 GMT"
}
] | 1,443,398,400,000 | [
[
"Konidaris",
"George",
""
]
] |
1509.08434 | Sayyed Ali Mirsoleimani | S. Ali Mirsoleimani, Aske Plaat and Jaap van den Herik | Ensemble UCT Needs High Exploitation | 7 pages, 7 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent results have shown that the MCTS algorithm (a new, adaptive,
randomized optimization algorithm) is effective in a remarkably diverse set of
applications in Artificial Intelligence, Operations Research, and High Energy
Physics. MCTS can find good solutions without domain dependent heuristics,
using the UCT formula to balance exploitation and exploration. It has been
suggested that the optimum in the exploitation- exploration balance differs for
different search tree sizes: small search trees need more exploitation; large
search trees need more exploration. Small search trees occur in variations of
MCTS, such as parallel and ensemble approaches. This paper investigates the
possibility of improving the performance of Ensemble UCT by increasing the
level of exploitation. As the search trees become smaller, we achieve
improved performance. The results are important for improving the performance
of large scale parallelism of MCTS.
| [
{
"version": "v1",
"created": "Mon, 28 Sep 2015 19:14:43 GMT"
}
] | 1,443,484,800,000 | [
[
"Mirsoleimani",
"S. Ali",
""
],
[
"Plaat",
"Aske",
""
],
[
"Herik",
"Jaap van den",
""
]
] |
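The balance discussed above is governed by the exploration constant in the UCT formula, so raising exploitation simply means lowering that constant. A minimal sketch of UCT child selection in its standard UCB1 form (not code from the paper):

```python
import math

def uct_score(mean_reward, child_visits, parent_visits, c):
    """UCB1-style value used by UCT to pick a child to explore."""
    if child_visits == 0:
        return float("inf")                 # always try unvisited children
    exploit = mean_reward
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits, c=0.7):
    """children: list of (mean_reward, visits). Smaller c (e.g. 0.1)
    favours exploitation, which the paper finds pays off for the small
    trees that arise in ensemble and parallel MCTS."""
    return max(range(len(children)),
               key=lambda i: uct_score(children[i][0], children[i][1],
                                       parent_visits, c))

children = [(0.55, 40), (0.60, 10), (0.20, 2)]
print(select_child(children, 52, c=0.7))   # 2: exploration still dominates
print(select_child(children, 52, c=0.1))   # 1: near-greedy, picks best mean
```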
1509.08764 | Quang Minh Ha | Quang Minh Ha, Yves Deville, Quang Dung Pham, Minh Ho\`ang H\`a | On the Min-cost Traveling Salesman Problem with Drone | 57 pages, technical report, latest work | null | 10.1016/j.trc.2017.11.015 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past few years, unmanned aerial vehicles (UAV), also known as
drones, have been adopted as part of a new logistic method in the commercial
sector called "last-mile delivery". In this novel approach, they are deployed
alongside trucks to deliver goods to customers to improve the quality of
service and reduce the transportation cost. This approach gives rise to a new
variant of the traveling salesman problem (TSP), called TSP with drone (TSP-D).
A variant of this problem that aims to minimize the time at which truck and
drone finish the service (or, in other words, to maximize the quality of
service) was studied in the work of Murray and Chu (2015). In contrast, this
paper considers a new variant of TSP-D in which the objective is to minimize
operational costs including total transportation cost and one created by waste
time a vehicle has to wait for the other. The problem is first formulated
mathematically. Then, two algorithms are proposed for the solution. The first
algorithm (TSP-LS) was adapted from the approach proposed by Murray and Chu
(2015), in which an optimal TSP solution is converted to a feasible TSP-D
solution by local searches. The second algorithm, a Greedy Randomized Adaptive
Search Procedure (GRASP), is based on a new split procedure that optimally
splits any TSP tour into a TSP-D solution. After a TSP-D solution has been
generated, it is then improved through local search operators. Numerical
results obtained on various instances of both objective functions with
different sizes and characteristics are presented. The results show that GRASP
outperforms TSP-LS in terms of solution quality under an acceptable running
time.
| [
{
"version": "v1",
"created": "Tue, 29 Sep 2015 14:19:47 GMT"
},
{
"version": "v2",
"created": "Mon, 23 May 2016 06:58:52 GMT"
},
{
"version": "v3",
"created": "Sat, 29 Jul 2017 18:08:35 GMT"
}
] | 1,514,937,600,000 | [
[
"Ha",
"Quang Minh",
""
],
[
"Deville",
"Yves",
""
],
[
"Pham",
"Quang Dung",
""
],
[
"Hà",
"Minh Hoàng",
""
]
] |
1509.08792 | Sergio Consoli | Sergio Consoli, Jos\`e Andr\`es Moreno P\`erez | An intelligent extension of Variable Neighbourhood Search for labelling
graph problems | MIC 2015: The XI Metaheuristics International Conference, 3 pages,
Agadir, June 7-10, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we describe an extension of the Variable Neighbourhood Search
(VNS) which integrates the basic VNS with other complementary approaches from
machine learning, statistics and experimental algorithmics, in order to produce
high-quality performance and to completely automate the resulting optimization
strategy. The resulting intelligent VNS has been successfully applied to a
couple of optimization problems where the solution space consists of the
subsets of a finite reference set. These problems are the labelled spanning
tree and forest problems that are formulated on an undirected labelled graph; a
graph where each edge has a label in a finite set of labels L. The problems
consist of selecting the subset of labels such that the subgraph generated by
these labels has an optimal spanning tree or forest, respectively. These
problems have several applications in the real world, where one aims to ensure
connectivity by means of homogeneous connections.
| [
{
"version": "v1",
"created": "Sun, 27 Sep 2015 22:12:42 GMT"
}
] | 1,443,571,200,000 | [
[
"Consoli",
"Sergio",
""
],
[
"Pèrez",
"Josè Andrès Moreno",
""
]
] |
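For the labelled spanning tree problem, a candidate solution is just a subset of labels, and feasibility is a connectivity test on the edges carrying those labels. The sketch below shows that evaluation plus a bare-bones shake-and-improve loop in the spirit of VNS, assuming the full graph is connected; it is our own illustration, not the intelligent VNS of the paper.

```python
import random

def n_components(n, labels, edges_by_label):
    """Connected components of the subgraph induced by a label subset."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]; x = parent[x]
        return x
    comps = n
    for lab in labels:
        for u, v in edges_by_label.get(lab, ()):
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv; comps -= 1
    return comps

def basic_vns(n, edges_by_label, iters=200, seed=0):
    rng = random.Random(seed)
    all_labels = list(edges_by_label)
    best = set(all_labels)                      # feasible if graph connected
    for _ in range(iters):
        k = rng.randint(1, max(1, len(best) // 2))
        cand = set(best)
        for lab in rng.sample(all_labels, k):   # shake: flip k labels
            cand.symmetric_difference_update({lab})
        for lab in list(cand):                  # local search: drop redundant labels
            if n_components(n, cand - {lab}, edges_by_label) == 1:
                cand.discard(lab)
        if n_components(n, cand, edges_by_label) == 1 and len(cand) < len(best):
            best = cand
    return best

edges_by_label = {"a": [(0, 1), (1, 2)], "b": [(2, 3)], "c": [(0, 3), (1, 3)]}
print(basic_vns(4, edges_by_label))   # a 2-label subset, e.g. {'a', 'b'}
```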
1509.08891 | Hao Wu | Hao Wu | The Computational Principles of Learning Ability | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | It has been quite a long time since AI researchers in the field of computer
science stopped talking about simulating human intelligence or trying to
explain how the brain works. Recently, represented by deep learning techniques,
the field of machine learning is experiencing unprecedented prosperity, and
some applications with near human-level performance give researchers the
confidence to suggest that their approaches are promising candidates for
understanding the mechanism of the human brain. However, apart from several
ancient philosophical criteria and some imaginary black-box tests (the Turing
test, the Chinese room), there is no computational-level explanation,
definition or criterion of intelligence or any of its components. Based on the
common sense that learning ability is one critical component of intelligence,
and inspecting it from the viewpoint of mapping relations, this paper presents
two laws which explain what the "learning ability" we are familiar with is and
under what conditions a mapping relation can be acknowledged as a "Learning
Model".
| [
{
"version": "v1",
"created": "Wed, 23 Sep 2015 04:25:44 GMT"
}
] | 1,443,571,200,000 | [
[
"Wu",
"Hao",
""
]
] |
1509.09240 | Chu Luo | Chu Luo | Solving a Mathematical Problem in Square War: a Go-like Board Game | 8 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a board game: Square War. The game definition of
Square War is similar to the classic Chinese board game Go. Then we propose a
mathematical problem of the game Square War. Finally, we show that the problem
can be solved by using a method combining mathematics and computer science.
| [
{
"version": "v1",
"created": "Sun, 26 Jul 2015 08:09:24 GMT"
},
{
"version": "v2",
"created": "Sun, 29 Nov 2015 09:15:36 GMT"
}
] | 1,448,928,000,000 | [
[
"Luo",
"Chu",
""
]
] |
1510.00604 | Laura Steinert | Laura Steinert, Jens Hoefinghoff, Josef Pauli | Online Vision- and Action-Based Object Classification Using Both
Symbolic and Subsymbolic Knowledge Representations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If a robot is supposed to roam an environment and interact with objects, it
is often necessary to know all possible objects in advance, so that a database
with models of all objects can be generated for visual identification. However,
this constraint cannot always be fulfilled. For that reason, model-based
object recognition cannot be used to guide the robot's interactions. Therefore,
this paper proposes a system that analyzes features of encountered objects and
then uses these features to compare unknown objects to already known ones. From
the resulting similarity appropriate actions can be derived. Moreover, the
system enables the robot to learn object categories by grouping similar objects
or by splitting existing categories. To represent the knowledge a hybrid form
is used, consisting of both symbolic and subsymbolic representations.
| [
{
"version": "v1",
"created": "Fri, 2 Oct 2015 14:08:36 GMT"
}
] | 1,444,003,200,000 | [
[
"Steinert",
"Laura",
""
],
[
"Hoefinghoff",
"Jens",
""
],
[
"Pauli",
"Josef",
""
]
] |
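The grouping behaviour described above reduces to a small online routine: compare an object's feature vector to each known category prototype and either join the best match or open a new category. This sketch is our own minimal reading of that idea; the paper's hybrid symbolic/subsymbolic representation is richer.

```python
import numpy as np

class OnlineCategorizer:
    def __init__(self, threshold=0.85):
        self.threshold = threshold            # min similarity to join
        self.prototypes = []                  # mean feature vector per category
        self.counts = []

    @staticmethod
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def observe(self, features):
        """Return the category index for this object, creating one if needed."""
        features = np.asarray(features, dtype=float)
        if self.prototypes:
            sims = [self.cosine(features, p) for p in self.prototypes]
            best = int(np.argmax(sims))
            if sims[best] >= self.threshold:
                # update the running-mean prototype of the matched category
                c = self.counts[best]
                self.prototypes[best] = (self.prototypes[best] * c + features) / (c + 1)
                self.counts[best] += 1
                return best
        self.prototypes.append(features)      # unknown object: new category
        self.counts.append(1)
        return len(self.prototypes) - 1

cat = OnlineCategorizer()
print(cat.observe([1.0, 0.1, 0.0]))   # 0: first object founds category 0
print(cat.observe([0.9, 0.2, 0.0]))   # 0: similar enough to join
print(cat.observe([0.0, 0.1, 1.0]))   # 1: dissimilar, new category
```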
1510.01291 | Samuel Kounaves | Dongping Fang, Elizabeth Oberlin, Wei Ding, Samuel P. Kounaves | A Common-Factor Approach for Multivariate Data Cleaning with an
Application to Mars Phoenix Mission Data | 12 pages, 10 figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data quality is fundamentally important to ensure the reliability of data for
stakeholders to make decisions. In real world applications, such as scientific
exploration of extreme environments, it is unrealistic to require raw data
collected to be perfect. As data miners, when it is infeasible to physically
know the why and the how in order to clean up the data, we propose to seek the
intrinsic structure of the signal to identify the common factors of
multivariate data. Using our new data driven learning method, the common-factor
data cleaning approach, we address an interdisciplinary challenge on
multivariate data cleaning when complex external impacts appear to interfere
with multiple data measurements. Existing data analyses typically process one
signal measurement at a time without considering the associations among all
signals. We analyze all signal measurements simultaneously to find the hidden
common factors that drive all measurements to vary together, but not as a
result of the true data measurements. We use common factors to reduce the
variations in the data without changing the base mean level of the data to
avoid altering the physical meaning.
| [
{
"version": "v1",
"created": "Mon, 5 Oct 2015 19:21:22 GMT"
},
{
"version": "v2",
"created": "Wed, 7 Oct 2015 16:47:30 GMT"
}
] | 1,444,262,400,000 | [
[
"Fang",
"Dongping",
""
],
[
"Oberlin",
"Elizabeth",
""
],
[
"Ding",
"Wei",
""
],
[
"Kounaves",
"Samuel P.",
""
]
] |
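One simple way to realise "reduce shared variation without changing the base mean level" is to centre each signal, strip the leading common component found by an SVD across all signals, and restore the means. The sketch below is our own generic version of that idea, not the mission-specific procedure of the paper.

```python
import numpy as np

def remove_common_factor(X, n_factors=1):
    """X: (n_samples, n_signals). Strip the top shared component(s).

    Means are restored afterwards, so the base level of each signal is
    unchanged; only the variation common to all signals is reduced.
    """
    mu = X.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    s = s.copy()
    s[:n_factors] = 0.0                      # delete the common factor(s)
    return U @ np.diag(s) @ Vt + mu

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 500)
common = np.sin(t)                            # interference hitting everything
X = np.stack([common + 0.1 * rng.standard_normal(500) + base
              for base in (2.0, -1.0, 0.5)], axis=1)
Xc = remove_common_factor(X)
print(X.mean(axis=0).round(2), Xc.mean(axis=0).round(2))  # means preserved
print(X.std(axis=0).round(2), Xc.std(axis=0).round(2))    # variation shrinks
```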