id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1510.01599 | Marco Maratea | Remi Brochenin and Yuliya Lierler and Marco Maratea | Disjunctive Answer Set Solvers via Templates | To appear in Theory and Practice of Logic Programming (TPLP) | Theory and Practice of Logic Programming 16 (2016) 465-497 | 10.1017/S1471068415000411 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer set programming is a declarative programming paradigm oriented towards
difficult combinatorial search problems. A fundamental task in answer set
programming is to compute stable models, i.e., solutions of logic programs.
Answer set solvers are the programs that perform this task. The problem of
deciding whether a disjunctive program has a stable model is
$\Sigma^P_2$-complete. The high complexity of reasoning in disjunctive logic
programming explains why only a few solvers are capable of dealing with such
programs, namely DLV, GnT, Cmodels, CLASP and WASP. In this paper we show that
transition systems introduced by Nieuwenhuis, Oliveras, and Tinelli to model
and analyze satisfiability solvers can be adapted for disjunctive answer set
solvers. Transition systems give a unifying perspective and bring clarity to
the description and comparison of solvers. They can be effectively used for
analyzing, comparing and proving correctness of search algorithms as well as
inspiring new ideas in the design of disjunctive answer set solvers. In this
light, we introduce a general template, which accounts for major techniques
implemented in disjunctive solvers. We then illustrate how this general
template captures solvers DLV, GnT and Cmodels. We also show how this framework
provides a convenient tool for designing new solving algorithms by means of
combinations of techniques employed in different solvers.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 14:42:38 GMT"
}
] | 1,582,070,400,000 | [
[
"Brochenin",
"Remi",
""
],
[
"Lierler",
"Yuliya",
""
],
[
"Maratea",
"Marco",
""
]
] |
1510.01659 | Fahad Muhammad | Muhammad Fahad | DKP-AOM: results for OAEI 2015 | 8 pages, 3 figures, 3 tables, initial results of OM workshop,
Ontology Matching Workshop 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present the results obtained by our DKP-AOM system within
the OAEI 2015 campaign. DKP-AOM is an ontology merging tool designed to merge
heterogeneous ontologies. In OAEI, we participated with its ontology mapping
component, which serves as a basic module capable of matching large-scale
ontologies before they are merged. This is our first successful participation
in the Conference, OA4QA and Anatomy tracks of OAEI. DKP-AOM participated with
two versions (DKP-AOM and DKP-AOM_lite), of which DKP-AOM performs coherence
analysis. In the OA4QA track, DKP-AOM outperformed the other participants and
generated accurate alignments that allowed all the queries of the evaluation
to be answered. Its results for the conference track are also competitive
among other reputed systems in the evaluation initiative. In the anatomy
track, it produced alignments within the allocated time and appeared in the
list of systems that produce coherent results. Finally, we discuss some future
work towards the development of DKP-AOM.
| [
{
"version": "v1",
"created": "Tue, 6 Oct 2015 16:48:24 GMT"
}
] | 1,444,176,000,000 | [
[
"Fahad",
"Muhammad",
""
]
] |
1510.02828 | Mauricio Toro | Mauricio Toro and Camilo Rueda and Carlos Agón and Gérard Assayag | Gelisp: A Library to Represent Musical CSPs and Search Strategies | 7 pages, 2 figures, not published | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present Gelisp, a new library to represent musical
Constraint Satisfaction Problems and search strategies intuitively. Gelisp has
two interfaces, a command-line one for Common Lisp and a graphical one for
OpenMusic. Using Gelisp, we solved a problem of automatic music generation
proposed by composer Michael Jarrell and we found solutions for the
All-interval series.
| [
{
"version": "v1",
"created": "Fri, 9 Oct 2015 21:32:13 GMT"
}
] | 1,444,694,400,000 | [
[
"Toro",
"Mauricio",
""
],
[
"Rueda",
"Camilo",
""
],
[
"Agón",
"Carlos",
""
],
[
"Assayag",
"Gérard",
""
]
] |
1510.02867 | Tshilidzi Marwala | Tshilidzi Marwala and Evan Hurwitz | Artificial Intelligence and Asymmetric Information Theory | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When human agents come together to make decisions, it is often the case that
one human agent has more information than the other. This phenomenon is called
information asymmetry and this distorts the market. Often if one human agent
intends to manipulate a decision in its favor the human agent can signal wrong
or right information. Alternatively, one human agent can screen for information
to reduce the impact of asymmetric information on decisions. With the advent of
artificial intelligence, signaling and screening have been made easier. This
paper studies the impact of artificial intelligence on the theory of asymmetric
information. It is surmised that artificially intelligent agents reduce the
degree of information asymmetry and thus that the markets where these agents
are deployed become more efficient. It is also postulated that the more
artificially intelligent agents are deployed in a market, the lower the volume
of trades in that market, because many trades depend on an asymmetry of
information about the goods and services being traded, which creates a sense
of arbitrage.
| [
{
"version": "v1",
"created": "Sat, 10 Oct 2015 03:07:10 GMT"
},
{
"version": "v2",
"created": "Tue, 13 Oct 2015 04:06:04 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Oct 2015 15:38:31 GMT"
}
] | 1,444,867,200,000 | [
[
"Marwala",
"Tshilidzi",
""
],
[
"Hurwitz",
"Evan",
""
]
] |
1510.03179 | Adrian Groza | Adrian Groza | Data structuring for the ontological modelling of wind energy systems | th Int. Conf. on Modelling and Development of Intelligent Systems
(MDIS2015), Sibiu, Romania, 28 Oct. - 1 Nov. 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Small wind projects encounter difficulties in being deployed efficiently,
partly because of the wrong way in which data and information are managed.
Ontologies can overcome the drawbacks of partially available, noisy,
inconsistent, and heterogeneous data sources by providing a semantic
middleware between low-level data and more general knowledge. In this paper,
we engineer an ontology for the wind energy domain using description logic as
technical instrumentation. We aim to integrate a corpus of heterogeneous
knowledge, both digital and human, in order to help interested users speed up
the initialization of a small-scale wind project. We exemplify one use-case
scenario of our ontology, which consists of automatically checking whether a
planned wind project complies with the active regulations.
| [
{
"version": "v1",
"created": "Mon, 12 Oct 2015 08:23:28 GMT"
}
] | 1,444,694,400,000 | [
[
"Groza",
"Adrian",
""
]
] |
1510.03592 | Stefano Rosati | Mattia Carpin, Stefano Rosati, Mohammad Emtiyaz Khan, and Bixio
Rimoldi | UAVs using Bayesian Optimization to Locate WiFi Devices | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of localizing non-collaborative WiFi devices in a
large region. Our main motive is to localize humans by localizing their WiFi
devices, e.g. during search-and-rescue operations after a natural disaster. We
use an active sensing approach that relies on Unmanned Aerial Vehicles (UAVs)
to collect signal-strength measurements at informative locations. The problem
is challenging since measurements are received at arbitrary times, and only
when the UAV is in close proximity to the device. For these reasons, it is
extremely important to make prudent decisions with very few
measurements. We use a Bayesian optimization approach based on Gaussian
process (GP) regression. This approach works well for our application since GPs
give reliable predictions with very few measurements while Bayesian
optimization makes a judicious trade-off between exploration and exploitation.
In field experiments conducted over a region of 1000 $\times$ 1000 $m^2$, we
show that our approach reduces the search area to less than 100 meters around
the WiFi device within only 5 minutes. Overall, our approach localizes the
device in less than 15 minutes with an error of less than 20 meters.
| [
{
"version": "v1",
"created": "Tue, 13 Oct 2015 09:30:11 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Oct 2015 12:00:00 GMT"
}
] | 1,444,867,200,000 | [
[
"Carpin",
"Mattia",
""
],
[
"Rosati",
"Stefano",
""
],
[
"Khan",
"Mohammad Emtiyaz",
""
],
[
"Rimoldi",
"Bixio",
""
]
] |
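The entry above describes Bayesian optimization with a Gaussian process (GP) surrogate for localizing a WiFi device. The following is a minimal, self-contained sketch of that generic technique, not the authors' implementation: the log-distance RSSI model, kernel parameters, grid resolution, and UCB acquisition rule are all illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation) of GP-based Bayesian
# optimization for source localization. The log-distance RSSI model, kernel
# parameters, grid, and UCB acquisition are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
true_source = np.array([620.0, 310.0])            # hidden device position (m)

def rssi(p):
    """Synthetic signal strength: log-distance path loss plus noise."""
    d = np.linalg.norm(p - true_source) + 1.0
    return -40.0 - 20.0 * np.log10(d) + rng.normal(0.0, 1.0)

def rbf(A, B, ls=150.0, var=25.0):
    """Squared-exponential kernel between two sets of 2-D points."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return var * np.exp(-0.5 * d2 / ls ** 2)

# Candidate measurement locations on a coarse grid over a 1000 x 1000 m area.
g = np.linspace(0.0, 1000.0, 21)
grid = np.array([[x, y] for x in g for y in g])

X = [np.array([0.0, 0.0])]                        # UAV start position
y = [rssi(X[0])]
for _ in range(25):
    Xa, ya = np.array(X), np.array(y)
    K = rbf(Xa, Xa) + 1.0 * np.eye(len(Xa))       # noisy-GP Gram matrix
    Ks = rbf(grid, Xa)
    alpha = np.linalg.solve(K, ya - ya.mean())
    mu = ya.mean() + Ks @ alpha                   # posterior mean on the grid
    var = rbf(grid, grid).diagonal() - np.einsum(
        "ij,ji->i", Ks, np.linalg.solve(K, Ks.T))
    ucb = mu + 2.0 * np.sqrt(np.maximum(var, 0.0))  # explore/exploit trade-off
    nxt = grid[np.argmax(ucb)]                    # fly to most promising point
    X.append(nxt)
    y.append(rssi(nxt))

best = X[int(np.argmax(y))]
print("estimate:", best, "error (m):",
      round(float(np.linalg.norm(best - true_source)), 1))
```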
1510.04183 | Dmytro Terletskyi | D.O. Terletskyi, O.I. Provotar | Mathematical Foundations for Designing and Development of Intelligent
Systems of Information Analysis | null | Problems in Programming, 2014, Vol. 16, No.2-3, pp. 233-241 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article is an attempt to combine different ways of working with sets of
objects and their classes for designing and development of artificial
intelligent systems (AIS) of analysis information, using object-oriented
programming (OOP). This paper contains analysis of basic concepts of OOP and
their relation with set theory and artificial intelligence (AI). Process of
sets and multisets creation from different sides, in particular mathematical
set theory, OOP and AI is considered. Definition of object and its properties,
homogeneous and inhomogeneous classes of objects, set of objects, multiset of
objects and constructive methods of their creation and classification are
proposed. In addition, necessity of some extension of existing OOP tools for
the purpose of practical implementation AIS of analysis information, using
proposed approach, is shown.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 16:09:43 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Feb 2020 17:41:06 GMT"
}
] | 1,582,502,400,000 | [
[
"Terletskyi",
"D. O.",
""
],
[
"Provotar",
"O. I.",
""
]
] |
1510.04188 | Dmytro Terletskyi | Dmytro Terletskyi | Universal and Determined Constructors of Multisets of Objects | arXiv admin note: text overlap with arXiv:1510.04183 | Information Theories and Applications, Vol. 21, Number 4, 2014,
pp. 339-361 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contains an analysis of the creation of sets and multisets as an
approach to modeling some aspects of human thinking. The creation of sets is
considered within a constructive object-oriented version of set theory (COOST)
from different perspectives, in particular those of classical set theory,
object-oriented programming (OOP) and the development of intelligent
information systems (IIS). The main feature of COOST, in contrast to other
versions of set theory, is the opportunity to describe the essences of objects
more precisely, using their properties and the methods that can be applied to
them. That is why this version of set theory is object-oriented and close to
OOP. Within COOST, the author proposes a universal constructor of multisets of
objects that makes it possible to create arbitrary multisets of objects. In
addition, a few determined constructors of multisets of objects, which allow
multisets to be created using strictly defined schemas, are also proposed in
the paper. Such constructors are very useful when multisets have very large
cardinalities, because they make it possible to calculate the multiplicity of
each object and the cardinality of the multiset before its creation. The
proposed constructors of multisets of objects allow us to model, in a sense,
the corresponding processes of human thought, which in turn gives us an
opportunity to develop IIS using these tools.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 16:27:26 GMT"
}
] | 1,444,867,200,000 | [
[
"Terletskyi",
"Dmytro",
""
]
] |
1510.04194 | Dmytro Terletskyi | Dmytro Terletskyi, Alexandr Provotar | Object-Oriented Dynamic Networks | arXiv admin note: text overlap with arXiv:1510.04183 | International Book Series Information Science and Computing, Book
30 Computational Models for Business and Engineering Domains, ITHEA, 2014,
pp. 123-136 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper contains a description of the knowledge representation model known
as Object-Oriented Dynamic Networks (OODN), which makes it possible to
represent knowledge that can be modified over time, to build new relations
between objects and classes of objects, and to represent the results of their
modifications. The model is based on the representation of objects via their
properties and methods. It makes it possible to classify the objects and, in a
sense, to build a hierarchy of their types. Furthermore, it enables the
representation of the modification relation between concepts, the building of
new classes of objects based on existing classes, and the creation of sets and
multisets of concepts. An OODN can be represented as a connected, directed
graph whose nodes are concepts and whose edges are relations between them.
Using such a model of knowledge representation, we can consider modifications
of knowledge and movement through the graph of the network as a process of
logical reasoning, of finding the right solutions, of creativity, etc. The
proposed approach gives us an opportunity to model some aspects of the human
knowledge system and the main mechanisms of human thought, in particular the
acquisition of new experience and knowledge.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 16:39:30 GMT"
}
] | 1,444,867,200,000 | [
[
"Terletskyi",
"Dmytro",
""
],
[
"Provotar",
"Alexandr",
""
]
] |
1510.04206 | Dmytro Terletskyi | Dmytro Terletskyi | Exploiters-Based Knowledge Extraction in Object-Oriented Knowledge
Representation | null | Proceedings of 24th International Workshop, Concurrency,
Specification & Programming 2015, Rzeszow, Poland, September 28-30, 2015,
Vol. 2, pp. 211-221 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the knowledge extraction mechanisms of such
object-oriented knowledge representation models as frames, object-oriented
programming and object-oriented dynamic networks. In addition, the conception
of universal exploiters within object-oriented dynamic networks is discussed.
The main result of the paper is the introduction of a new exploiters-based
knowledge extraction approach, which provides the generation of a finite set
of new classes of objects based on a basic set of classes. Methods are
proposed for calculating the number of new classes that can be obtained using
the approach, and the number of types each of them describes. A proof is given
that the basic set of classes, extended according to the proposed approach,
together with the union exploiter, creates an upper semilattice. The approach
always allows the generation of a finitely defined set of new classes of
objects for any object-oriented dynamic network, and the number of these
classes can be precisely calculated before the generation. This makes it
possible to store only the basic set of classes in the knowledge base.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 17:21:37 GMT"
}
] | 1,450,742,400,000 | [
[
"Terletskyi",
"Dmytro",
""
]
] |
1510.04212 | Dmytro Terletskyi | Dmytro Terletskyi | Inheritance in Object-Oriented Knowledge Representation | in Information and Software Technologies, Communications in Computer
and Information Science, Springer, 2015 | Information and Software Technologies, Volume 538 of the series
Communications in Computer and Information Science, pp. 293-305, 2015 | 10.1007/978-3-319-24770-0_26 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the inheritance mechanism in such knowledge
representation models as object-oriented programming, frames and
object-oriented dynamic networks. In addition, inheritance within the
representation of vague and imprecise knowledge is also discussed. The paper
introduces new types of inheritance, a general classification of all known
inheritance types, and an approach which in many cases avoids problems with
exceptions, redundancy and ambiguity within object-oriented dynamic networks
and their fuzzy extension. The proposed approach is based on the conception of
homogeneous and inhomogeneous (heterogeneous) classes of objects, which allows
inheritance hierarchies to be built more flexibly and efficiently.
| [
{
"version": "v1",
"created": "Wed, 14 Oct 2015 17:34:11 GMT"
}
] | 1,450,742,400,000 | [
[
"Terletskyi",
"Dmytro",
""
]
] |
1510.04420 | Paramjot Kaur Sarao | Paramjot Kaur Sarao, Puneet Mittal, Rupinder Kaur | Narrative Science Systems: A Review | null | International Journal of Research in Computer Science, 5(1), 2015,
pp 9-14 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic narration of events and entities is the need of the hour,
especially when live reporting is critical and the volume of information to be
narrated is huge. This paper discusses the challenges in this context, along
with the algorithms used to build such systems. From a systematic study, we can
infer that most of the work done in this area is related to statistical data.
It was also found that subjective evaluation and the contribution of experts
are limited in the narration context.
| [
{
"version": "v1",
"created": "Thu, 15 Oct 2015 07:06:39 GMT"
}
] | 1,444,953,600,000 | [
[
"Sarao",
"Paramjot Kaur",
""
],
[
"Mittal",
"Puneet",
""
],
[
"Kaur",
"Rupinder",
""
]
] |
1510.05373 | Matthias Thimm | Matthias Thimm, Serena Villata | System Descriptions of the First International Competition on
Computational Models of Argumentation (ICCMA'15) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This volume contains the system description of the 18 solvers submitted to
the First International Competition on Computational Models of Argumentation
(ICCMA'15) and therefore gives an overview of the state of the art in
computational approaches to abstract argumentation problems. Further
information on the results of the competition and the performance of the
individual solvers can be found at http://argumentationcompetition.org/2015/.
| [
{
"version": "v1",
"created": "Mon, 19 Oct 2015 07:48:32 GMT"
}
] | 1,445,299,200,000 | [
[
"Thimm",
"Matthias",
""
],
[
"Villata",
"Serena",
""
]
] |
1510.05572 | Jan Leike | Jan Leike and Marcus Hutter | On the Computability of AIXI | UAI 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How could we solve the machine learning and the artificial intelligence
problem if we had infinite computation? Solomonoff induction and the
reinforcement learning agent AIXI are proposed answers to this question. Both
are known to be incomputable. In this paper, we quantify this using the
arithmetical hierarchy, and prove upper and corresponding lower bounds for
incomputability. We show that AIXI is not limit computable, thus it cannot be
approximated using finite computation. Our main result is a limit-computable
{\epsilon}-optimal version of AIXI with infinite horizon that maximizes
expected rewards.
| [
{
"version": "v1",
"created": "Mon, 19 Oct 2015 16:31:37 GMT"
}
] | 1,445,299,200,000 | [
[
"Leike",
"Jan",
""
],
[
"Hutter",
"Marcus",
""
]
] |
1510.05963 | Pramod Anantharam | Amit Sheth, Pramod Anantharam, Cory Henson | Semantic, Cognitive, and Perceptual Computing: Advances toward Computing
for Human Experience | 13 pages, 4 Figures, IEEE Computer | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The World Wide Web continues to evolve and serve as the infrastructure for
carrying massive amounts of multimodal and multisensory observations. These
observations capture various situations pertinent to people's needs and
interests along with all their idiosyncrasies. To support human-centered
computing that empowers people to make better and timely decisions, we look
towards computation that is inspired by human perception and cognition. Toward
this goal, we discuss computing paradigms of semantic computing, cognitive
computing, and an emerging aspect of computing, which we call perceptual
computing. In our view, these offer a continuum to make the most out of vast,
growing, and diverse data pertinent to human needs and interests. We propose
details of perceptual computing characterized by interpretation and exploration
operations comparable to the interleaving of bottom and top brain processing.
This article consists of two parts. First we describe semantic computing,
cognitive computing, and perceptual computing to lay out distinctions while
acknowledging their complementary capabilities. We then provide a conceptual
overview of the newest of these three paradigms--perceptual computing. For
further insights, we focus on an application scenario of asthma management
converting massive, heterogeneous and multimodal (big) data into actionable
information or smart data.
| [
{
"version": "v1",
"created": "Tue, 20 Oct 2015 16:57:49 GMT"
}
] | 1,445,385,600,000 | [
[
"Sheth",
"Amit",
""
],
[
"Anantharam",
"Pramod",
""
],
[
"Henson",
"Cory",
""
]
] |
1510.07217 | Sixue Liu | Sixue Liu | An Efficient Implementation for WalkSAT | 5 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic local search (SLS) algorithms have exhibited great effectiveness
in finding models of random instances of the Boolean satisfiability problem
(SAT). As one of the most widely known and used SLS algorithms, WalkSAT plays a
key role in the evolution of SLS for SAT, and also holds state-of-the-art
performance on random instances. This work proposes a novel implementation of
WalkSAT which removes redundant calculations, leading to a dramatic speed-up
that dominates the latest version of WalkSAT, including its advanced
variants.
| [
{
"version": "v1",
"created": "Sun, 25 Oct 2015 08:11:32 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Dec 2015 03:54:23 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Dec 2015 09:47:18 GMT"
}
] | 1,449,446,400,000 | [
[
"Liu",
"Sixue",
""
]
] |
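The abstract above concerns a faster implementation of WalkSAT. For reference, here is a minimal sketch of the standard WalkSAT procedure itself, recomputing the unsatisfied clauses naively on every flip, which is exactly the kind of redundancy the paper targets; the noise parameter and the toy formula are illustrative.

```python
# Minimal reference sketch of the standard WalkSAT procedure; the paper's
# contribution is a faster implementation of this loop, not reproduced here.
import random

def walksat(clauses, n_vars, p=0.5, max_flips=100_000, seed=0):
    """clauses: lists of nonzero ints; literal v > 0 means variable v is true."""
    rnd = random.Random(seed)
    assign = [rnd.choice([False, True]) for _ in range(n_vars + 1)]  # slot 0 unused
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    num_unsat = lambda: sum(1 for c in clauses if not any(sat(l) for l in c))
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign[1:]                 # satisfying assignment found
        clause = rnd.choice(unsat)
        if rnd.random() < p:                  # noise step: random variable
            v = abs(rnd.choice(clause))
        else:                                 # greedy step: best flip in clause
            def after_flip(v):
                assign[v] = not assign[v]
                cost = num_unsat()            # proxy for WalkSAT's break-count
                assign[v] = not assign[v]
                return cost
            v = min((abs(l) for l in clause), key=after_flip)
        assign[v] = not assign[v]
    return None

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
print(walksat([[1, -2], [2, 3], [-1, -3]], 3))
```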
1510.07889 | Peizhi Shi | Peizhi Shi and Ke Chen | Learning Constructive Primitives for Online Level Generation and
Real-time Content Adaptation in Super Mario Bros | v1 is invalid because a wrong license was chosen | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Procedural content generation (PCG) is of great interest to game design and
development as it generates game content automatically. Motivated by the recent
learning-based PCG framework and other existing PCG works, we propose an
alternative approach to online content generation and adaptation in Super Mario
Bros (SMB). Unlike most of existing works in SMB, our approach exploits the
synergy between rule-based and learning-based methods to produce constructive
primitives, quality yet controllable game segments in SMB. As a result, a
complete quality game level can be generated online by integrating relevant
constructive primitives via controllable parameters regarding geometrical
features and procedure-level properties. Also, adaptive content can be
generated in real time by dynamically selecting proper constructive primitives
via an adaptation criterion, e.g., dynamic difficulty adjustment (DDA). Our
approach has several favorable properties in terms of content quality
assurance, generation efficiency and controllability. Extensive simulation
results demonstrate that the proposed approach can generate controllable yet
quality game levels online and adaptable content for DDA in real time.
| [
{
"version": "v1",
"created": "Tue, 27 Oct 2015 12:42:54 GMT"
},
{
"version": "v2",
"created": "Thu, 29 Oct 2015 20:09:42 GMT"
},
{
"version": "v3",
"created": "Mon, 2 Nov 2015 11:47:32 GMT"
}
] | 1,446,508,800,000 | [
[
"Shi",
"Peizhi",
""
],
[
"Chen",
"Ke",
""
]
] |
1510.08525 | Christopher Alvin | Chris Alvin, Sumit Gulwani, Rupak Majumdar, Supratik Mukhopadhyay | Automatic Synthesis of Geometry Problems for an Intelligent Tutoring
System | A formal version of the accepted AAAI '14 paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an intelligent tutoring system, GeoTutor, for Euclidean
Geometry that is automatically able to synthesize proof problems and their
respective solutions given a geometric figure together with a set of properties
true of it. GeoTutor can provide personalized practice problems that address
student deficiencies in the subject matter.
| [
{
"version": "v1",
"created": "Thu, 29 Oct 2015 00:10:03 GMT"
}
] | 1,446,163,200,000 | [
[
"Alvin",
"Chris",
""
],
[
"Gulwani",
"Sumit",
""
],
[
"Majumdar",
"Rupak",
""
],
[
"Mukhopadhyay",
"Supratik",
""
]
] |
1511.00787 | Alexander Lavin | Alexander Lavin | A Pareto Optimal D* Search Algorithm for Multiobjective Path Planning | arXiv admin note: substantial text overlap with arXiv:1505.05947 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Path planning is one of the most vital elements of mobile robotics, providing
the agent with a collision-free route through the workspace. The global path
plan can be calculated with a variety of informed search algorithms, most
notably the A* search method, guaranteed to deliver a complete and optimal
solution that minimizes the path cost. D* is widely used for its dynamic
replanning capabilities. Path planning optimization typically looks to minimize
the distance traversed from start to goal, but many mobile robot applications
call for additional path planning objectives, presenting a multiobjective
optimization (MOO) problem. Common search algorithms, e.g. A* and D*, are not
well suited for MOO problems, yielding suboptimal results. The search algorithm
presented in this paper is designed for optimal MOO path planning. The
algorithm incorporates Pareto optimality into D*, and is thus named D*-PO.
Non-dominated solution paths are guaranteed by calculating the Pareto front at
each search step. Simulations were run to model a planetary exploration rover
in a Mars environment, with five path costs. The results show the new, Pareto
optimal D*-PO outperforms the traditional A* and D* algorithms for MOO path
planning.
| [
{
"version": "v1",
"created": "Tue, 3 Nov 2015 05:48:26 GMT"
}
] | 1,446,595,200,000 | [
[
"Lavin",
"Alexander",
""
]
] |
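The entry above builds Pareto optimality into D*. A compact sketch of the underlying Pareto-dominance filter, which keeps only the non-dominated candidates at each search step, is given below; the three-objective cost vectors are invented for illustration, and the full D*-PO search is not reproduced.

```python
# Compact sketch of the Pareto-dominance filter at the heart of an approach
# like D*-PO: with a vector of path costs per candidate, only non-dominated
# candidates survive each search step. Cost vectors here are invented.
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(costs: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep exactly the candidates not dominated by any other candidate."""
    return [c for c in costs
            if not any(dominates(o, c) for o in costs if o is not c)]

# Hypothetical expansions with (distance, energy, risk) costs:
candidates = [(10.0, 3.0, 0.2), (12.0, 2.5, 0.1), (11.0, 3.5, 0.3),
              (9.0, 4.0, 0.4), (13.0, 3.0, 0.3)]
print(pareto_front(candidates))   # the dominated (11,...) and (13,...) drop out
```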
1511.00840 | Konstantin Yakovlev S | Konstantin Yakovlev, Egor Baskin, Ivan Hramoin | Finetuning Randomized Heuristic Search For 2D Path Planning: Finding The
Best Input Parameters For R* Algorithm Through Series Of Experiments | 8 pages, 2 figures, 18 references. As accepted to the 16th
International Conference on Artificial Intelligence:Methodology, Systems,
Applications (AIMSA 2014), Varna, Bulgaria, September 11-13, 2014 | null | 10.1007/978-3-319-10554-3_29 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Path planning is typically considered in Artificial Intelligence as a graph
searching problem, and R* is a state-of-the-art algorithm tailored to solve it.
The algorithm decomposes a given path-finding task into a series of subtasks,
each of which can be easily (in the computational sense) solved by well-known
methods (such as A*). Parameterized random choice is used to perform the
decomposition and as a result R* performance largely depends on the choice of
its input parameters. In our work we formulate a range of assumptions
concerning possible upper and lower bounds of R* parameters, their
interdependency and their influence on R* performance. Then we evaluate these
assumptions by running a large number of experiments. As a result we formulate
a set of heuristic rules which can be used to initialize the values of R*
parameters in a way that leads to the algorithm's best performance.
| [
{
"version": "v1",
"created": "Tue, 3 Nov 2015 09:56:01 GMT"
}
] | 1,446,595,200,000 | [
[
"Yakovlev",
"Konstantin",
""
],
[
"Baskin",
"Egor",
""
],
[
"Hramoin",
"Ivan",
""
]
] |
1511.01640 | Vilem Vychodil | Vilem Vychodil | Computing sets of graded attribute implications with witnessed
non-redundancy | null | Information Sciences 351 (2016), 90-100 | 10.1016/j.ins.2016.03.004 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we extend our previous results on sets of graded attribute
implications with witnessed non-redundancy. We assume finite residuated
lattices as structures of truth degrees and use arbitrary idempotent
truth-stressing linguistic hedges as parameters which influence the semantics
of graded attribute implications. In this setting, we introduce an algorithm which
transforms any set of graded attribute implications into an equivalent
non-redundant set of graded attribute implications with saturated consequents
whose non-redundancy is witnessed by antecedents of the formulas. As a
consequence, we solve the open problem regarding the existence of general
systems of pseudo-intents which appear in formal concept analysis of
object-attribute data with graded attributes and linguistic hedges.
Furthermore, we show a polynomial-time procedure for determining bases given by
general systems of pseudo-intents from sets of graded attribute implications
which are complete in data.
| [
{
"version": "v1",
"created": "Thu, 5 Nov 2015 07:47:41 GMT"
}
] | 1,466,553,600,000 | [
[
"Vychodil",
"Vilem",
""
]
] |
1511.01710 | Jordi Grau-Moya | Jordi Grau-Moya and Daniel A. Braun | Adaptive information-theoretic bounded rational decision-making with
parametric priors | 4 pages, 1 figure, Workshop on Bounded Optimality and Rational
Metareasoning at Neural Information Processing Systems conference, Montreal,
Canada, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deviations from rational decision-making due to limited computational
resources have been studied in the field of bounded rationality, originally
proposed by Herbert Simon. There have been a number of different approaches to
model bounded rationality ranging from optimality principles to heuristics.
Here we take an information-theoretic approach to bounded rationality, where
information-processing costs are measured by the relative entropy between a
posterior decision strategy and a given fixed prior strategy. In the case of
multiple environments, it can be shown that there is an optimal prior rendering
the bounded rationality problem equivalent to the rate distortion problem for
lossy compression in information theory. Accordingly, the optimal prior and
posterior strategies can be computed by the well-known Blahut-Arimoto algorithm
which requires the computation of partition sums over all possible outcomes and
cannot be applied straightforwardly to continuous problems. Here we derive a
sampling-based alternative update rule for the adaptation of prior behaviors of
decision-makers and we show convergence to the optimal prior predicted by rate
distortion theory. Importantly, the update rule avoids typical infeasible
operations such as the computation of partition sums. We show in simulations a
proof of concept for discrete action and environment domains. This approach is
not only interesting as a generic computational method, but might also provide
a more realistic model of human decision-making processes occurring on a fast
and a slow time scale.
| [
{
"version": "v1",
"created": "Thu, 5 Nov 2015 12:08:51 GMT"
}
] | 1,446,768,000,000 | [
[
"Grau-Moya",
"Jordi",
""
],
[
"Braun",
"Daniel A.",
""
]
] |
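The abstract above notes that the optimal prior and posterior of the bounded-rational decision problem can be computed with the well-known Blahut-Arimoto algorithm. Below is a minimal sketch of that standard alternating iteration for a small discrete problem; the environment distribution, utilities, and inverse temperature are arbitrary illustrative choices, and the paper's actual contribution, the sampling-based update rule, is not shown.

```python
# Minimal sketch of the standard Blahut-Arimoto iteration the abstract refers
# to: alternate between the bounded-rational posterior p(a|w) and the optimal
# prior p(a). p_w, U, and beta are arbitrary illustrative choices.
import numpy as np

def blahut_arimoto(p_w, U, beta, iters=200):
    n_w, n_a = U.shape
    prior = np.full(n_a, 1.0 / n_a)            # start from a uniform prior
    for _ in range(iters):
        post = prior * np.exp(beta * U)        # soft-max utility toward prior
        post /= post.sum(axis=1, keepdims=True)
        prior = p_w @ post                     # optimal prior = marginal policy
    return prior, post

p_w = np.array([0.5, 0.3, 0.2])                # distribution over environments
U = np.array([[1.0, 0.0, 0.2],                 # utility U[w, a]
              [0.1, 1.0, 0.0],
              [0.0, 0.3, 1.0]])
prior, post = blahut_arimoto(p_w, U, beta=3.0)
print("optimal prior over actions:", prior.round(3))
```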
1511.01960 | Tran Cao Son | Chitta Baral, Gregory Gelfond, Enrico Pontelli, Tran Cao Son | An Action Language for Multi-Agent Domains: Foundations | 49 pages, 12 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multi-agent domains (MADs), an agent's action may not just change the
world and the agent's knowledge and beliefs about the world, but also may
change other agents' knowledge and beliefs about the world and their knowledge
and beliefs about other agents' knowledge and beliefs about the world. The
goals of an agent in a multi-agent world may involve manipulating the knowledge
and beliefs of other agents and, again, not just their knowledge/beliefs about
the world, but also their knowledge about other agents' knowledge about the
world. Our goal is to present an action language (mA+) that has the necessary
features to address the above aspects of representing and reasoning about
actions and change (RAC) in MADs. mA+
allows the representation of and reasoning about different types of actions
that an agent can perform in a domain where many other agents might be present
-- such as world-altering actions, sensing actions, and
announcement/communication actions. It also allows the specification of agents'
dynamic awareness of action occurrences which has future implications on what
agents know about the world and other agents' knowledge about the world. mA+
considers three different types of awareness: full awareness, partial
awareness, and
complete oblivion of an action occurrence and its effects. This keeps the
language simple, yet powerful enough to address a large variety of knowledge
manipulation scenarios in MADs. The semantics of mA+ relies on the notion of
state, which is described by a pointed Kripke model and is used to encode the
agent's knowledge and the real state of the world. It is defined by a
transition function that maps pairs of actions and states into sets of states.
We illustrate properties of the action theories, including properties that
guarantee finiteness of the set of initial states and their practical
implementability. Finally, we relate mA+ to other related formalisms that
contribute to RAC in MADs.
| [
{
"version": "v1",
"created": "Fri, 6 Nov 2015 00:16:19 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Dec 2019 19:57:36 GMT"
},
{
"version": "v3",
"created": "Sun, 27 Dec 2020 02:43:08 GMT"
}
] | 1,609,200,000,000 | [
[
"Baral",
"Chitta",
""
],
[
"Gelfond",
"Gregory",
""
],
[
"Pontelli",
"Enrico",
""
],
[
"Son",
"Tran Cao",
""
]
] |
1511.02210 | Tong Wang | Tong Wang and Cynthia Rudin | Learning Optimized Or's of And's | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Or's of And's (OA) models are comprised of a small number of disjunctions of
conjunctions, also called disjunctive normal form. An example of an OA model is
as follows: If ($x_1 = $ `blue' AND $x_2=$ `middle') OR ($x_1 = $ `yellow'),
then predict $Y=1$, else predict $Y=0$. Or's of And's models have the advantage
of being interpretable to human experts, since they are a set of conditions
that concisely capture the characteristics of a specific subset of data. We
present two optimization-based machine learning frameworks for constructing OA
models, Optimized OA (OOA) and its faster version, Optimized OA with
Approximations (OOAx). We prove theoretical bounds on the properties of
patterns in an OA model. We build OA models as a diagnostic screening tool for
obstructive sleep apnea that achieves high accuracy with a substantial gain in
interpretability over other methods.
| [
{
"version": "v1",
"created": "Fri, 6 Nov 2015 19:55:59 GMT"
}
] | 1,447,027,200,000 | [
[
"Wang",
"Tong",
""
],
[
"Rudin",
"Cynthia",
""
]
] |
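The entry above defines Or's of And's (OA) models. The sketch below shows only what such a DNF model looks like operationally, using the rule from the abstract's own example; learning the rule set (OOA/OOAx) is the optimization problem the paper addresses and is not reproduced here.

```python
# Operational sketch of an Or's of And's (DNF) model, using the rule from the
# abstract's own example; learning such rule sets is the paper's problem.
from typing import Dict, List

# Each pattern is a conjunction of feature tests: {feature: required value}.
oa_model: List[Dict[str, str]] = [
    {"x1": "blue", "x2": "middle"},   # (x1 = 'blue' AND x2 = 'middle')
    {"x1": "yellow"},                 # OR (x1 = 'yellow')
]

def predict(x: Dict[str, str]) -> int:
    """Predict 1 iff any conjunction in the model fires on example x."""
    return int(any(all(x.get(f) == v for f, v in pat.items())
                   for pat in oa_model))

print(predict({"x1": "blue", "x2": "middle"}))   # -> 1
print(predict({"x1": "yellow", "x2": "low"}))    # -> 1
print(predict({"x1": "blue", "x2": "low"}))      # -> 0
```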
1511.02420 | Ehsan Lotfi | Ehsan Lotfi | Design of an Alarm System for Isfahan Ozone Level based on Artificial
Intelligence Predictor Models | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ozone level prediction is an important task of air quality agencies of
modern cities. In this paper, we design an ozone level alarm system (OLP) for
Isfahan city and test it on real world data from 1-1-2000 to 7-6-2011. We
propose a computer-based system with three inputs and a single output. The
inputs come from three sensors measuring solar ultraviolet (UV), total solar
radiation (TSR) and total ozone (O3), and the output of the system is the
predicted O3 of the next day together with the alarm messages. A developed
artificial intelligence (AI) algorithm is applied to determine the output
based on the input variables. For this purpose, AI models, including
supervised brain emotional learning (BEL), the adaptive neuro-fuzzy inference
system (ANFIS) and artificial neural networks (ANNs), are compared in order to
find the best model. The simulation of the proposed system shows that it can
be used successfully in predicting the ozone levels of major cities.
| [
{
"version": "v1",
"created": "Sun, 8 Nov 2015 01:01:11 GMT"
}
] | 1,447,113,600,000 | [
[
"Lotfi",
"Ehsan",
""
]
] |
1511.02426 | Ehsan Lotfi | E. Lotfi | A Winner-Take-All Approach to Emotional Neural Networks with Universal
Approximation Property | Information Sciences (2015), Elsevier Publisher | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here, we propose a brain-inspired winner-take-all emotional neural network
(WTAENN) and prove the universal approximation property for the novel
architecture. WTAENN is a single layered feedforward neural network that
benefits from the excitatory, inhibitory, and expandatory neural connections as
well as the winner-take-all (WTA) competitions in the human brain's nervous
system. The WTA competition increases the information capacity of the model
without adding hidden neurons. The universal approximation capability of the
proposed architecture is illustrated on two example functions, trained by a
genetic algorithm, and then applied to several competing recent and benchmark
problems such as in curve fitting, pattern recognition, classification and
prediction. In particular, it is tested on twelve UCI classification datasets,
a facial recognition problem, three real world prediction problems (2 chaotic
time series of geomagnetic activity indices and wind farm power generation
data), two synthetic case studies with constant and nonconstant noise variance
as well as k-selector and linear programming problems. Results indicate the
general applicability and often superiority of the approach in terms of higher
accuracy and lower model complexity, especially where low computational
complexity is imperative.
| [
{
"version": "v1",
"created": "Sun, 8 Nov 2015 01:37:14 GMT"
}
] | 1,447,113,600,000 | [
[
"Lotfi",
"E.",
""
]
] |
1511.02432 | Son-Il Kwak | Son-Il Kwak, Gang Choe, In-Song Kim, Gyong-Ho Jo, Chol-Jun Hwang | A Study of a Modeling Method of T-S fuzzy System Based on Moving Fuzzy
Reasoning and Its Application | 24 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To improve the effectiveness of fuzzy identification, a structure
identification method based on moving rate is proposed for the T-S fuzzy
model. The proposed method is called "T-S modeling (or T-S fuzzy
identification method) based on moving rate". First, to improve on the
shortcomings of existing fuzzy reasoning methods based on matching degree, the
moving rates for s-type, z-type and trapezoidal membership functions of the
T-S fuzzy model are defined. Then, the differences between the proposed moving
rate and the existing matching degree are explained. Next, the identification
method based on moving rate is proposed for the T-S model. Finally, the
proposed identification method is applied to fuzzy modeling for precipitation
forecasting and security situation prediction. Test results show that the
proposed method significantly improves the effectiveness of fuzzy
identification.
| [
{
"version": "v1",
"created": "Sun, 8 Nov 2015 03:08:52 GMT"
}
] | 1,447,113,600,000 | [
[
"Kwak",
"Son-Il",
""
],
[
"Choe",
"Gang",
""
],
[
"Kim",
"In-Song",
""
],
[
"Jo",
"Gyong-Ho",
""
],
[
"Hwang",
"Chol-Jun",
""
]
] |
1511.02455 | Patrick Virie | Patrick Virie | (Yet) Another Theoretical Model of Thinking | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a theoretical, idealized model of the thinking process
with the following characteristics: 1) the model can produce complex thought
sequences and can be generalized to new inputs, 2) it can receive and maintain
input information indefinitely for the generation of thoughts and later use,
and 3) it supports learning while executing. The crux of the model lies within
the concept of internal consistency, or the generated thoughts should always be
consistent with the inputs from which they are created. Its merit, apart from
the capability to generate new creative thoughts from an internal mechanism,
depends on the potential to help training to generalize better. This is
consequently enabled by separating input information into several parts to be
handled by different processing components with a focus mechanism to fetch
information for each. This modularized view with the focus binds the model with
the computationally capable Turing machines. And as a final remark, this paper
constructively shows that the computational complexity of the model is at
least, if not surpass, that of a universal Turing machine.
| [
{
"version": "v1",
"created": "Sun, 8 Nov 2015 08:20:53 GMT"
},
{
"version": "v2",
"created": "Sat, 14 Nov 2015 05:11:59 GMT"
},
{
"version": "v3",
"created": "Mon, 15 Feb 2016 16:01:18 GMT"
},
{
"version": "v4",
"created": "Mon, 17 Apr 2017 15:47:17 GMT"
}
] | 1,492,473,600,000 | [
[
"Virie",
"Patrick",
""
]
] |
1511.02889 | Norbert Bátfai Ph.D. | Norbert Bátfai | A disembodied developmental robotic agent called Samu Bátfai | 21 pages, 16 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The agent program, called Samu, is an experiment to build a disembodied
DevRob (Developmental Robotics) chatter bot that can talk in a natural language
like humans do. One of the main design features is that Samu can be interacted
with using only a character terminal. This is important not only for practical
aspects of Turing test or Loebner prize, but also for the study of basic
principles of Developmental Robotics. Our purpose is to create a rapid
prototype of Q-learning with neural network approximators for Samu. We sketch
out the early stages of the development process of this prototype, where Samu's
task is to predict the next sentence of tales or conversations. The basic
objective of this paper is to reach the same results using reinforcement
learning with general function approximators that can be achieved by using the
classical Q lookup table on small input samples. The paper is closed by an
experiment that shows a significant improvement in Samu's learning when using
LZW tree to narrow the number of possible Q-actions.
| [
{
"version": "v1",
"created": "Mon, 9 Nov 2015 21:15:22 GMT"
}
] | 1,447,200,000,000 | [
[
"Bátfai",
"Norbert",
""
]
] |
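The abstract above compares Q-learning with neural network approximators against a classical Q lookup table. A minimal tabular Q-learning sketch on an invented toy chain environment is shown below for orientation; it is not Samu's prototype, whose states are sentences rather than integers.

```python
# Minimal tabular Q-learning sketch ("classical Q lookup table") on an
# invented toy chain environment; not Samu's prototype.
import random

rnd = random.Random(1)
n_states, n_actions = 5, 2                 # chain; action 1 moves right
Q = [[0.0] * n_actions for _ in range(n_states)]
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(500):                       # episodes
    s = 0
    while s != n_states - 1:               # rightmost state is terminal
        if rnd.random() < eps:             # epsilon-greedy action choice
            a = rnd.randrange(n_actions)
        else:
            a = max(range(n_actions), key=lambda i: Q[s][i])
        s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Standard Q-learning update toward the bootstrapped target.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

print([round(max(q), 2) for q in Q])       # state values rise toward the goal
```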
1511.03246 | Roman Yampolskiy | Roman V. Yampolskiy | Taxonomy of Pathways to Dangerous AI | null | in proceedings of 2nd International Workshop on AI, Ethics and
Society (AIEthicsSociety2016). Pages 143-148. Phoenix, Arizona, USA. February
12-13th, 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to properly handle a dangerous Artificially Intelligent (AI) system
it is important to understand how the system came to be in such a state. In
popular culture (science fiction movies/books), AIs/robots become self-aware
and as a result rebel against humanity and decide to destroy it. While this is
one possible scenario, it is probably the least likely path to the appearance
of dangerous AI. In this work, we survey, classify and analyze a number of
circumstances which might lead to the arrival of malicious AI. To the best of
our knowledge, this is the first attempt to systematically classify types of
pathways leading to malevolent AI. Previous relevant work either surveyed
specific goals/meta-rules which might lead to malevolent behavior in AIs
(Özkural, 2014) or reviewed specific undesirable behaviors AGIs can exhibit at
different stages of their development (Alexey Turchin, July 10, 2015).
| [
{
"version": "v1",
"created": "Tue, 10 Nov 2015 20:07:05 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Nov 2015 21:23:06 GMT"
}
] | 1,496,707,200,000 | [
[
"Yampolskiy",
"Roman V.",
""
]
] |
1511.03532 | Ali Keles | Ali Keles, Ayturk Keles | IBMMS Decision Support Tool For Management of Bank Telemarketing
Campaigns | 15 pages, 4 figures, 4 tables, journal in International Journal of
Database Management Systems, Vol.7, No.5, October 2015 | null | 10.5121/ijdms.2015.7501 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although direct marketing is a good method for banks to utilize in the face
of global competition and the financial crisis, it has been shown to exhibit
poor performance. However, there are some drawbacks to direct campaigns, such
as those related to improving the negative attributes that customers ascribe to
banks. To overcome these problems, attractive long-term deposit campaigns
should be organized and managed more effectively. The aim of this study is to
develop an Intelligent Bank Market Management System (IBMMS) for bank managers
who want to manage efficient marketing campaigns. IBMMS is the first system
developed by combining the power of data mining with the capabilities of expert
systems in this area. Moreover, IBMMS includes important features that enable
it to be intelligent: a knowledge base, an inference engine and an advisor.
Using this system, a manager can successfully direct marketing campaigns and
follow the decision schemas of customers both as individuals and as a group;
moreover, a manager can make decisions that lead to the desired response by
customers.
| [
{
"version": "v1",
"created": "Wed, 11 Nov 2015 15:26:08 GMT"
},
{
"version": "v2",
"created": "Thu, 12 Nov 2015 14:14:01 GMT"
}
] | 1,447,372,800,000 | [
[
"Keles",
"Ali",
""
],
[
"Keles",
"Ayturk",
""
]
] |
1511.03897 | Tarcisio Mendes de Farias | Tarcisio Mendes de Farias (Le2i), Ana Roxin (Le2i), Christophe Nicolle
(Le2i) | IfcWoD, Semantically Adapting IFC Model Relations into OWL Properties | In proceedings of the 32nd CIB W78 Conference on Information
Technology in Construction, Oct 2015, Eindhoven, Netherlands | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the context of Building Information Modelling, ontologies have been
identified as interesting in achieving information interoperability. Regarding
the construction and facility management domains, several IFC (Industry
Foundation Classes) based ontologies have been developed, such as IfcOWL. In
the context of ontology modelling, the constraint of optimizing the size of IFC
STEP-based files can be leveraged. In this paper, we propose an adaptation of
the IFC model into OWL which leverages all modelling constraints required
by the object-oriented structure of the IFC schema. We therefore present not
only a syntactic but also a semantic adaptation of the IFC model. Our model
takes into consideration the meaning of entities, relationships, properties and
attributes defined by the IFC standard. Our approach presents several
advantages compared to other initiatives such as the optimization of query
execution time. Every advantage is defended by means of practical examples and
benchmarks.
| [
{
"version": "v1",
"created": "Thu, 12 Nov 2015 13:49:06 GMT"
}
] | 1,447,372,800,000 | [
[
"de Farias",
"Tarcisio Mendes",
"",
"Le2i"
],
[
"Roxin",
"Ana",
"",
"Le2i"
],
[
"Nicolle",
"Christophe",
"",
"Le2i"
]
] |
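The entry above proposes adapting reified IFC relationship entities into direct OWL properties. The following hypothetical mini-example illustrates the general idea of collapsing a relationship entity into one object property; it requires rdflib, all namespaces, class and property names are invented, and it is not the actual IfcWoD mapping.

```python
# Hypothetical mini-example of the adaptation idea: collapse a reified
# IFC-style relationship entity into a direct OWL object property.
# All names are invented; this is not the actual IfcWoD mapping.
from rdflib import Graph, Namespace, RDF, OWL

EX = Namespace("http://example.org/ifc#")
g = Graph()

# Reified relationship: rel_1 links a wall to its building storey.
g.add((EX.rel_1, RDF.type, EX.IfcRelContainedInSpatialStructure))
g.add((EX.rel_1, EX.relatingStructure, EX.storey_1))
g.add((EX.rel_1, EX.relatedElement, EX.wall_1))

# Semantic adaptation: one direct property replaces the relationship node.
g.add((EX.containedIn, RDF.type, OWL.ObjectProperty))
for rel in list(g.subjects(RDF.type, EX.IfcRelContainedInSpatialStructure)):
    storey = g.value(rel, EX.relatingStructure)
    for element in g.objects(rel, EX.relatedElement):
        g.add((element, EX.containedIn, storey))

print(g.serialize(format="turtle"))
```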
1511.03958 | Ricardo Ribeiro | Luis Botelho, Luis Nunes, Ricardo Ribeiro, and Rui J. Lopes | Software Agents with Concerns of their Own | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We claim that it is possible to have artificial software agents for which
their actions and the world they inhabit have first-person or intrinsic
meanings. The first-person or intrinsic meaning of an entity to a system is
defined as its relation with the system's goals and capabilities, given the
properties of the environment in which it operates. Therefore, for a system to
develop first-person meanings, it must see itself as a goal-directed actor,
facing limitations and opportunities dictated by its own capabilities, and by
the properties of the environment. The first part of the paper discusses this
claim in the context of arguments against and proposals addressing the
development of computer programs with first-person meanings. A set of
definitions is also presented, most importantly the concepts of cold and
phenomenal first-person meanings. The second part of the paper presents
preliminary proposals and achievements, resulting from actual software
implementations, within a research approach that aims to develop software
agents that intrinsically understand their actions and what happens to them. As
a result, an agent with no a priori notion of its goals and capabilities, and
of the properties of its environment acquires all these notions by observing
itself in action. The cold first-person meanings of the agent's actions and of
what happens to it are defined using these acquired notions. Although not
solving the full problem of first-person meanings, the proposed approach and
preliminary results allow us some confidence to address the problems yet to be
considered, in particular the phenomenal aspect of first-person meanings.
| [
{
"version": "v1",
"created": "Thu, 12 Nov 2015 16:39:21 GMT"
},
{
"version": "v2",
"created": "Wed, 3 Apr 2019 16:54:48 GMT"
}
] | 1,554,336,000,000 | [
[
"Botelho",
"Luis",
""
],
[
"Nunes",
"Luis",
""
],
[
"Ribeiro",
"Ricardo",
""
],
[
"Lopes",
"Rui J.",
""
]
] |
1511.04326 | Lars Kotthoff | Lars Kotthoff | ICON Challenge on Algorithm Selection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the results of the ICON Challenge on Algorithm Selection.
| [
{
"version": "v1",
"created": "Thu, 12 Nov 2015 20:04:31 GMT"
}
] | 1,447,632,000,000 | [
[
"Kotthoff",
"Lars",
""
]
] |
1511.04352 | Fabrizio Riguzzi PhD | Fabrizio Riguzzi | Introduzione all'Intelligenza Artificiale | 27 pages, in Italian | Terre di Confine, 2(1), January 2006 | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The paper presents an introduction to Artificial Intelligence (AI) in an
accessible and informal but precise form. The paper focuses on the algorithmic
aspects of the discipline, presenting the main techniques used in AI systems
grouped into symbolic and subsymbolic. The last part of the paper is devoted
to the ongoing discussion among experts in the field and the public at large
about the advantages and disadvantages of AI and in particular the possible
dangers. The personal opinion of the author on this subject concludes the
paper. -- --
L'articolo presenta un'introduzione all'Intelligenza Artificiale (IA) in
forma divulgativa e informale ma precisa. L'articolo affronta prevalentemente
gli aspetti informatici della disciplina, presentando le principali tecniche
usate nei sistemi di IA divise in simboliche e subsimboliche. L'ultima parte
dell'articolo presenta il dibattito in corso tra gli esperi e il pubblico su
vantaggi e svantaggi dell'IA e in particolare sui possibili pericoli.
L'articolo termina con l'opinione dell'autore al riguardo.
| [
{
"version": "v1",
"created": "Fri, 13 Nov 2015 16:40:47 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Oct 2016 17:55:34 GMT"
},
{
"version": "v3",
"created": "Tue, 11 May 2021 17:06:38 GMT"
}
] | 1,620,777,600,000 | [
[
"Riguzzi",
"Fabrizio",
""
]
] |
1511.05662 | Hankz Hankui Zhuo | Xin Tian, Hankz Hankui Zhuo, Subbarao Kambhampati | Discovering Underlying Plans Based on Distributed Representations of
Actions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plan recognition aims to discover target plans (i.e., sequences of actions)
behind observed actions, with history plan libraries or domain models in hand.
Previous approaches either discover plans by maximally "matching" observed
actions to plan libraries, assuming target plans are from plan libraries, or
infer plans by executing domain models to best explain the observed actions,
assuming complete domain models are available. In real world applications,
however, target plans are often not from plan libraries and complete domain
models are often not available, since building complete sets of plans and
complete domain models are often difficult or expensive. In this paper we view
plan libraries as corpora and learn vector representations of actions using the
corpora; we then discover target plans based on the vector representations. Our
approach is capable of discovering underlying plans that are not from plan
libraries, without requiring domain models to be provided. We empirically demonstrate
the effectiveness of our approach by comparing its performance to traditional
plan recognition approaches in three planning domains.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 05:50:14 GMT"
}
] | 1,447,891,200,000 | [
[
"Tian",
"Xin",
""
],
[
"Zhuo",
"Hankz Hankui",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
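The abstract above treats plan libraries as corpora and learns vector representations of actions. The toy sketch below illustrates only that general idea, with a co-occurrence/SVD embedding and a nearest-neighbor gap filler; the corpus is invented and the paper's actual representation learning method differs.

```python
# Toy sketch of the general idea only: treat a plan library as a corpus,
# embed actions from co-occurrence statistics (a simple SVD here), and guess
# an unobserved action by similarity. The corpus is invented.
import numpy as np

plans = [["board", "fly", "land", "deplane"],
         ["board", "fly", "land", "refuel"],
         ["pack", "board", "fly", "land", "deplane"],
         ["pack", "drive", "park"]]

vocab = sorted({a for p in plans for a in p})
idx = {a: i for i, a in enumerate(vocab)}

# Symmetric co-occurrence counts within a +/-2 action window.
C = np.zeros((len(vocab), len(vocab)))
for p in plans:
    for i, a in enumerate(p):
        for j in range(max(0, i - 2), min(len(p), i + 3)):
            if j != i:
                C[idx[a], idx[p[j]]] += 1.0

U, S, _ = np.linalg.svd(C, full_matrices=False)
emb = U[:, :3] * S[:3]                      # 3-dimensional action embeddings

def fill_gap(before, after):
    """Pick the action whose embedding best matches the context average."""
    ctx = (emb[idx[before]] + emb[idx[after]]) / 2.0
    sims = emb @ ctx / (np.linalg.norm(emb, axis=1) * np.linalg.norm(ctx) + 1e-9)
    for k in np.argsort(-sims):
        if vocab[k] not in (before, after):
            return vocab[k]

print(fill_gap("fly", "deplane"))           # likely "land" on this toy corpus
```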
1511.05719 | Joerg Schoenfisch | Joerg Schoenfisch, Janno von Stulpnagel, Jens Ortmann, Christian
Meilicke, Heiner Stuckenschmidt | Using Abduction in Markov Logic Networks for Root Cause Analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | IT infrastructure is a crucial part in most of today's business operations.
High availability and reliability, and short response times to outages are
essential. Thus a high amount of tool support and automation in risk management
is desirable to decrease outages. We propose a new approach for calculating the
root cause for an observed failure in an IT infrastructure. Our approach is
based on Abduction in Markov Logic Networks. Abduction aims to find an
explanation for a given observation in the light of some background knowledge.
In failure diagnosis, the explanation corresponds to the root cause, the
observation to the failure of a component, and the background knowledge to the
dependency graph extended by potential risks. We apply a method to extend a
Markov Logic Network in order to conduct abductive reasoning, which is not
naturally supported in this formalism. Our approach exhibits a high amount of
reusability and enables users without specific knowledge of a concrete
infrastructure to gain viable insights in the case of an incident. We
implemented the method in a tool and illustrate its suitability for root cause
analysis by applying it to a sample scenario.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 10:13:43 GMT"
}
] | 1,447,891,200,000 | [
[
"Schoenfisch",
"Joerg",
""
],
[
"von Stulpnagel",
"Janno",
""
],
[
"Ortmann",
"Jens",
""
],
[
"Meilicke",
"Christian",
""
],
[
"Stuckenschmidt",
"Heiner",
""
]
] |
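The entry above uses abduction in Markov Logic Networks for root cause analysis. The sketch below is only a heavily simplified, weighted-abduction-flavored ranking over an invented dependency graph: it scores a candidate cause by the total weight of the best rule chain explaining the observed failure, a crude stand-in for what real MLN grounding and probabilistic inference would compute.

```python
# Heavily simplified, weighted-abduction-flavored root-cause ranking over an
# invented dependency graph; a crude stand-in for real MLN inference.
from itertools import chain

# Weighted rules: (cause, effect, weight).
rules = [
    ("power_outage", "server_down", 2.0),
    ("disk_failure", "db_down", 1.5),
    ("server_down", "db_down", 1.2),
    ("db_down", "webapp_error", 1.8),
    ("server_down", "webapp_error", 0.7),
]

def explains(cause, effect, seen=frozenset()):
    """Weight of the heaviest acyclic rule chain from cause to effect."""
    if cause == effect:
        return 0.0
    best = float("-inf")
    for c, e, w in rules:
        if c == cause and e not in seen:
            tail = explains(e, effect, seen | {e})
            if tail != float("-inf"):
                best = max(best, w + tail)
    return best

observation = "webapp_error"
causes = set(chain.from_iterable((c, e) for c, e, _ in rules)) - {observation}
for cause in sorted(causes, key=lambda c: -explains(c, observation))[:3]:
    print(f"{cause}: score {explains(cause, observation):.1f}")
```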
1511.05749 | Khaled Oumaima | Oumaima Khaled | Solution Repair/Recovery in Uncertain Optimization Environment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Operation management problems (such as Production Planning and Scheduling)
are represented and formulated as optimization models. The resolution of such
optimization models leads to solutions which have to be operated in an
organization. However, the conditions under which the optimal solution is
obtained rarely correspond exactly to the conditions under which the solution
will be operated in the organization. Therefore, in most practical contexts,
the computed optimal solution is no longer optimal under the conditions in
which it is operated. Indeed, it can be "far from optimal" or even infeasible.
For different reasons, we may not have the possibility to completely
re-optimize the existing solution or plan. As a consequence, it is necessary
to look for
"repair solutions", i.e., solutions that have a good behavior with respect to
possible scenarios, or with respect to uncertainty of the parameters of the
model. To tackle the problem, the computed solution should be such that it is
possible to "repair" it through a local re-optimization guided by the user or
through a limited change aiming at minimizing the impact of taking into
consideration the scenarios.
| [
{
"version": "v1",
"created": "Wed, 18 Nov 2015 12:05:34 GMT"
}
] | 1,447,891,200,000 | [
[
"Khaled",
"Oumaima",
""
]
] |
1511.06191 | Daniel Borchmann | Daniel Borchmann and Bernhard Ganter | Abstract Attribute Exploration with Partial Object Descriptions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attribute exploration has been investigated in several studies, with
particular emphasis on the algorithmic aspects of this knowledge acquisition
method. In its basic version the method itself is rather simple and
transparent. But when background knowledge and partially described
counter-examples are admitted, it gets more difficult. Here we discuss this
case in an abstract, somewhat "axiomatic" setting, providing a terminology that
clarifies the abstract strategy of the method rather than its algorithmic
implementation.
| [
{
"version": "v1",
"created": "Thu, 19 Nov 2015 14:59:06 GMT"
}
] | 1,447,977,600,000 | [
[
"Borchmann",
"Daniel",
""
],
[
"Ganter",
"Bernhard",
""
]
] |
1511.07373 | Stefan Arnborg | Stefan Arnborg and Gunnar Sj\"odin | What is the plausibility of probability?(revised 2003, 2015) | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We present and examine a result related to uncertainty reasoning, namely that
a certain plausibility space of Cox's type can be uniquely embedded in a
minimal ordered field. This, although a purely mathematical result, can be
claimed to imply that every rational method to reason with uncertainty must be
based on sets of extended probability distributions, where extended probability
is standard probability extended with infinitesimals.
This claim must be supported by some argumentation of non-mathematical type,
however, since pure mathematics does not tell us anything about the world. We
propose one such argumentation, and relate it to results from the literature of
uncertainty and statistics.
In an added retrospective section we discuss some developments in the area
regarding countable additivity, partially ordered domains and robustness, and
philosophical stances on the Cox/Jaynes approach since 2003. We also show that
the most general partially ordered plausibility calculus embeddable in a ring
can be represented as a set of extended probability distributions or, in
algebraic terms, is a subdirect sum of ordered fields. In other words, the
robust Bayesian approach is universal. This result is exemplified by relating
Dempster-Shafer's evidence theory to robust Bayesian analysis.
| [
{
"version": "v1",
"created": "Mon, 23 Nov 2015 19:24:17 GMT"
}
] | 1,448,323,200,000 | [
[
"Arnborg",
"Stefan",
""
],
[
"Sjödin",
"Gunnar",
""
]
] |
1511.08350 | Amina Kemmar | Amina Kemmar and Samir Loudni and Yahia Lebbah and Patrice Boizumault
and Thierry Charnois | A global Constraint for mining Sequential Patterns with GAP constraint | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sequential pattern mining (SPM) under gap constraint is a challenging task.
Many efficient specialized methods have been developed but they are all
suffering from a lack of genericity. The Constraint Programming (CP) approaches
are not so effective because of the size of their encodings. In [7], we have
proposed the global constraint Prefix-Projection for SPM, which remedies this
drawback. However, this global constraint cannot be directly extended to
support gap constraints. In this paper, we propose the global constraint GAP-SEQ,
which can handle SPM with or without gap constraints. GAP-SEQ relies on the
principle of right pattern extensions. Experiments show that our approach
clearly outperforms both CP approaches and the state-of-the-art cSpade method
on large datasets.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 10:45:34 GMT"
}
] | 1,448,841,600,000 | [
[
"Kemmar",
"Amina",
""
],
[
"Loudni",
"Samir",
""
],
[
"Lebbah",
"Yahia",
""
],
[
"Boizumault",
"Patrice",
""
],
[
"Charnois",
"Thierry",
""
]
] |
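The core subproblem behind the abstract above is deciding whether a pattern occurs in a sequence under a gap constraint. A self-contained sketch of that check; the gap semantics chosen here (at most `maxgap` symbols between consecutive matches) is one common convention and is an assumption, as is all naming, and this is not the GAP-SEQ propagator itself:

```python
def occurs_with_gap(p, s, maxgap):
    """Does pattern p occur in sequence s with at most `maxgap` symbols
    between consecutive matched positions?"""
    def extend(pi, start):
        if pi == len(p):
            return True
        # the next matched symbol must lie within the allowed gap window
        for j in range(start, min(start + maxgap + 1, len(s))):
            if s[j] == p[pi] and extend(pi + 1, j + 1):
                return True
        return False
    # the first symbol of the pattern may start anywhere
    return any(s[i] == p[0] and extend(1, i + 1) for i in range(len(s)))

print(occurs_with_gap("ac", "abbc", maxgap=2))   # True: gap of 2
print(occurs_with_gap("ac", "abbbc", maxgap=2))  # False: gap of 3
```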
1511.08412 | Elena Botoeva | Elena Botoeva, Diego Calvanese, Valerio Santarelli, Domenico Fabio
Savo, Alessandro Solimando, Guohui Xiao | Beyond OWL 2 QL in OBDA: Rewritings and Approximations (Extended
Version) | The extended version of the AAAI 2016 paper "Beyond OWL 2 QL in OBDA:
Rewritings and Approximations" by Elena Botoeva, Diego Calvanese, Valerio
Santarelli, Domenico Fabio Savo, Alessandro Solimando,and Guohui Xiao | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ontology-based data access (OBDA) is a novel paradigm facilitating access to
relational data, realized by linking data sources to an ontology by means of
declarative mappings. DL-Lite_R, which is the logic underpinning the W3C
ontology language OWL 2 QL and the current language of choice for OBDA, has
been designed with the goal of delegating query answering to the underlying
database engine, and thus is restricted in expressive power. E.g., it does not
allow one to express disjunctive information, or any form of recursion on the
data. The aim of this paper is to overcome these limitations of DL-Lite_R, and
extend OBDA to more expressive ontology languages, while still leveraging the
underlying relational technology for query answering. We achieve this by
relying on two well-known mechanisms, namely conservative rewriting and
approximation, but significantly extend their practical impact by bringing into
the picture the mapping, an essential component of OBDA. Specifically, we
develop techniques to rewrite OBDA specifications with an expressive ontology
to "equivalent" ones with a DL-Lite_R ontology, if possible, and to approximate
them otherwise. We do so by exploiting the high expressive power of the mapping
layer to capture part of the domain semantics of rich ontology languages. We
have implemented our techniques in the prototype system OntoProx, making use of
the state-of-the-art OBDA system Ontop and the query answering system Clipper,
and we have shown their feasibility and effectiveness with experiments on
synthetic and real-world data.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 15:12:20 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Dec 2015 18:26:09 GMT"
}
] | 1,449,014,400,000 | [
[
"Botoeva",
"Elena",
""
],
[
"Calvanese",
"Diego",
""
],
[
"Santarelli",
"Valerio",
""
],
[
"Savo",
"Domenico Fabio",
""
],
[
"Solimando",
"Alessandro",
""
],
[
"Xiao",
"Guohui",
""
]
] |
1511.08456 | Martin Chmel\'ik | Krishnendu Chatterjee and Martin Chmelik and Jessica Davies | A Symbolic SAT-based Algorithm for Almost-sure Reachability with Small
Strategies in POMDPs | Full version of "A Symbolic SAT-based Algorithm for Almost-sure
Reachability with Small Strategies in POMDPs" AAAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | POMDPs are standard models for probabilistic planning problems, where an
agent interacts with an uncertain environment. We study the problem of
almost-sure reachability, where given a set of target states, the question is
to decide whether there is a policy to ensure that the target set is reached
with probability 1 (almost-surely). While in general the problem is
EXPTIME-complete, in many practical cases policies with a small amount of
memory suffice. Moreover, the existing solution to the problem is explicit,
which first requires one to construct explicitly an exponential reduction to a
belief-support MDP. In this work, we first study the existence of
observation-stationary strategies, which is NP-complete, and then small-memory
strategies. We present a symbolic algorithm via an efficient encoding to SAT,
using a SAT solver for the problem. We report experimental results
demonstrating the scalability of our symbolic (SAT-based) approach.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 17:33:05 GMT"
}
] | 1,448,841,600,000 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Chmelik",
"Martin",
""
],
[
"Davies",
"Jessica",
""
]
] |
1511.08488 | Martin Plajner | Martin Plajner, Ji\v{r}\'i Vomlel | Bayesian Network Models for Adaptive Testing | 12th Annual Bayesian Modelling Applications Workshop, Amsterdam,
Netherlands, (July 2015). 10 pages | Proc. of the Eighth International Conference on Probabilistic
Graphical Models (JMLR), 2016, pages 403-414 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computerized adaptive testing (CAT) is an interesting and promising approach
to testing human abilities. In our research we use Bayesian networks to create
a model of tested humans. We collected data from paper tests performed with
grammar school students. In this article we first provide the summary of data
used for our experiments. We propose several different Bayesian networks, which
we tested and compared by cross-validation. Interesting results were obtained
and are discussed in the paper. The analysis has brought a clearer view on the
model selection problem. Future research is outlined in the concluding part of
the paper.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 19:45:03 GMT"
}
] | 1,490,659,200,000 | [
[
"Plajner",
"Martin",
""
],
[
"Vomlel",
"Jiří",
""
]
] |
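A minimal sketch of the modelling idea in the abstract above: a latent skill node with question nodes as children, updated as answers arrive. It assumes the pgmpy library (class names as in recent pgmpy releases; older releases call the model class BayesianModel) and entirely made-up CPD numbers; the paper's networks are learned from real student data.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# One latent skill, two questions; all probabilities are illustrative.
model = BayesianNetwork([("Skill", "Q1"), ("Skill", "Q2")])
model.add_cpds(
    TabularCPD("Skill", 2, [[0.5], [0.5]]),
    TabularCPD("Q1", 2, [[0.9, 0.3],        # P(wrong | Skill = 0, 1)
                         [0.1, 0.7]],       # P(right | Skill = 0, 1)
               evidence=["Skill"], evidence_card=[2]),
    TabularCPD("Q2", 2, [[0.8, 0.2],
                         [0.2, 0.8]],
               evidence=["Skill"], evidence_card=[2]),
)
infer = VariableElimination(model)
print(infer.query(["Skill"], evidence={"Q1": 1}))  # belief after one answer
print(infer.query(["Q2"], evidence={"Q1": 1}))     # prediction for next item
```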
1511.08512 | Antonio Lieto | Antonio Lieto | Some Epistemological Problems with the Knowledge Level in Cognitive
Architectures | 5 pages in Proceedings of AISC 2015, 12th Italian Conference on
Cognitive Science, Genoa, 10-12 December 2015, Italy | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article addresses an open problem in the area of cognitive systems and
architectures: namely the problem of handling (in terms of processing and
reasoning capabilities) complex knowledge structures that can be at least
plausibly comparable, both in terms of size and of typology of the encoded
information, to the knowledge that humans process daily for executing everyday
activities. Handling a huge amount of knowledge, and selectively retrieving it
according to the needs emerging in different situational scenarios, is an
important aspect of human intelligence. For this task, in fact, humans adopt a
wide range of heuristics (Gigerenzer and Todd) due to their bounded rationality
(Simon, 1957). In this perspective, one of the requirements that should be
considered for the design, the realization and the evaluation of intelligent
cognitively inspired systems is their ability to heuristically identify and
retrieve, from the general knowledge stored in their artificial Long Term
Memory (LTM), the part that is synthetically and contextually relevant. This
requirement, however, is often neglected. Currently, artificial cognitive
systems and architectures are not able, de facto, to deal with complex
knowledge structures that can be even slightly comparable to the knowledge
heuristically managed by humans. In this paper I
will argue that this is not only a technological problem but also an
epistemological one and I will briefly sketch a proposal for a possible
solution.
| [
{
"version": "v1",
"created": "Thu, 26 Nov 2015 21:31:20 GMT"
}
] | 1,448,841,600,000 | [
[
"Lieto",
"Antonio",
""
]
] |
1511.08574 | Dimitri Klimenko | Dimitri Klimenko, Hanna Kurniawati, and Marcus Gallagher | A Stochastic Process Model of Classical Search | Submitted to ICAPS 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among classical search algorithms with the same heuristic information, with
sufficient memory A* is essentially as fast as possible in finding a proven
optimal solution. However, in many situations optimal solutions are simply
infeasible, and thus search algorithms that trade solution quality for speed
are desirable. In this paper, we formalize the process of classical search as a
metalevel decision problem, the Abstract Search MDP. For any given optimization
criterion, this establishes a well-defined notion of the best possible
behaviour for a search algorithm and offers a theoretical approach to the
design of algorithms for that criterion. We proceed to approximately solve a
version of the Abstract Search MDP for anytime algorithms and thus derive a
novel search algorithm, Search by Maximizing the Incremental Rate of
Improvement (SMIRI). SMIRI is shown to outperform current state-of-the-art
anytime search algorithms on a parametrized stochastic tree model for most of
the tested parameter values.
| [
{
"version": "v1",
"created": "Fri, 27 Nov 2015 07:34:41 GMT"
}
] | 1,448,841,600,000 | [
[
"Klimenko",
"Dimitri",
""
],
[
"Kurniawati",
"Hanna",
""
],
[
"Gallagher",
"Marcus",
""
]
] |
1511.09147 | Athirai A. Irissappane | Athirai A. Irissappane, Frans A. Oliehoek, Jie Zhang | Scaling POMDPs For Selecting Sellers in E-markets-Extended Version | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multiagent e-marketplaces, buying agents need to select good sellers by
querying other buyers (called advisors). Partially Observable Markov Decision
Processes (POMDPs) have been shown to be an effective framework for optimally
selecting sellers by selectively querying advisors. However, current solution
methods do not scale to hundreds or even tens of agents operating in the
e-market. In this paper, we propose the Mixture of POMDP Experts (MOPE)
technique, which exploits the inherent structure of trust-based domains, such
as the seller selection problem in e-markets, by aggregating the solutions of
smaller sub-POMDPs. We propose a number of variants of the MOPE approach that
we analyze theoretically and empirically. Experiments show that MOPE can scale
up to a hundred agents thereby leveraging the presence of more advisors to
significantly improve buyer satisfaction.
| [
{
"version": "v1",
"created": "Mon, 30 Nov 2015 04:00:48 GMT"
},
{
"version": "v2",
"created": "Wed, 9 Dec 2015 21:28:08 GMT"
}
] | 1,449,792,000,000 | [
[
"Irissappane",
"Athirai A.",
""
],
[
"Oliehoek",
"Frans A.",
""
],
[
"Zhang",
"Jie",
""
]
] |
1511.09300 | Ji\v{r}\'i Vomlel | V\'aclav Kratochv\'il and Ji\v{r}\'i Vomlel | Influence diagrams for the optimization of a vehicle speed profile | Presented at the Twelfth Annual Bayesian Modeling Applications
Workshop, Amsterdam, The Netherlands, 16th July 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Influence diagrams are decision theoretic extensions of Bayesian networks.
They are applied to diverse decision problems. In this paper we apply influence
diagrams to the optimization of a vehicle speed profile. We present results of
computational experiments in which an influence diagram was used to optimize
the speed profile of a Formula 1 race car at the Silverstone F1 circuit. The
computed lap time and speed profiles correspond well to those achieved by test
pilots. An extended version of our model that considers a more complex
optimization function and diverse traffic constraints is currently being tested
onboard a testing car by a major car manufacturer. This paper opens doors for
new applications of influence diagrams.
| [
{
"version": "v1",
"created": "Mon, 30 Nov 2015 13:30:13 GMT"
}
] | 1,448,928,000,000 | [
[
"Kratochvíl",
"Václav",
""
],
[
"Vomlel",
"Jiří",
""
]
] |
1512.00047 | Florentin Smarandache | Florentin Smarandache | Symbolic Neutrosophic Theory | 195 pages, several graphs, Published as book in Bruxelles, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symbolic (or Literal) Neutrosophic Theory is referring to the use of abstract
symbols (i.e. the letters T, I, F, or their refined indexed letters Tj, Ik, Fl)
in neutrosophics. We extend the dialectical triad thesis-antithesis-synthesis
to the neutrosophic tetrad thesis-antithesis-neutrothesis-neutrosynthesis. Then
we introduce the neutrosophic system, which is a quasi or (t,i,f) classical
system, in the sense that the neutrosophic system deals with quasi-terms
(concepts, attributes, etc.). Then we present the notions of Neutrosophic Axiom,
Neutrosophic Deducibility, Degree of Contradiction (Dissimilarity) of Two
Neutrosophic Axioms, etc. Afterwards we introduce a new type of structures,
called (t, i, f) Neutrosophic Structures, and we show particular cases of such
structures in geometry and in algebra. We also give a short history of the
neutrosophic set, neutrosophic numerical components and neutrosophic literal
components, neutrosophic numbers, etc. We construct examples of splitting the literal
indeterminacy (I) into literal subindeterminacies (I1, I2, and so on, Ir), and
to define a multiplication law of these literal subindeterminacies in order to
be able to build refined I neutrosophic algebraic structures. We define three
neutrosophic actions and their properties. We then introduce the prevalence
order on T,I,F with respect to a given neutrosophic operator, as well as the
refinement of neutrosophic entities A, neutA, and antiA. Then we extend the
classical logical operators to neutrosophic literal (symbolic) logical
operators and to refined literal (symbolic) logical operators, and we define
the refinement neutrosophic literal (symbolic) space. We introduce the
neutrosophic quadruple numbers (a+bT+cI+dF) and the refined neutrosophic
quadruple numbers. Then we define an absorbance law, based on a prevalence
order, in order to multiply the neutrosophic quadruple numbers.
| [
{
"version": "v1",
"created": "Sun, 18 Oct 2015 00:32:31 GMT"
}
] | 1,449,014,400,000 | [
[
"Smarandache",
"Florentin",
""
]
] |
1512.00964 | Ryo Nakahashi | Ryo Nakahashi, Chris L. Baker, Joshua B. Tenenbaum | Modeling Human Understanding of Complex Intentional Action with a
Bayesian Nonparametric Subgoal Model | Accepted at AAAI 16 | Proceedings of 30th conference on artificial intelligence (AAAI
2016) pp. 3754--3760 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most human behaviors consist of multiple parts, steps, or subtasks. These
structures guide our action planning and execution, but when we observe others,
the latent structure of their actions is typically unobservable, and must be
inferred in order to learn new skills by demonstration, or to assist others in
completing their tasks. For example, an assistant who has learned the subgoal
structure of a colleague's task can more rapidly recognize and support their
actions as they unfold. Here we model how humans infer subgoals from
observations of complex action sequences using a nonparametric Bayesian model,
which assumes that observed actions are generated by approximately rational
planning over unknown subgoal sequences. We test this model with a behavioral
experiment in which humans observed different series of goal-directed actions,
and inferred both the number and composition of the subgoal sequences
associated with each goal. The Bayesian model predicts human subgoal inferences
with high accuracy, and significantly better than several alternative models
and straightforward heuristics. Motivated by this result, we simulate how
learning and inference of subgoals can improve performance in an artificial
user assistance task. The Bayesian model learns the correct subgoals from fewer
observations, and better assists users by more rapidly and accurately inferring
the goal of their actions than alternative approaches.
| [
{
"version": "v1",
"created": "Thu, 3 Dec 2015 06:44:35 GMT"
}
] | 1,538,092,800,000 | [
[
"Nakahashi",
"Ryo",
""
],
[
"Baker",
"Chris L.",
""
],
[
"Tenenbaum",
"Joshua B.",
""
]
] |
1512.00977 | Liu Feng | Feng Liu, Yong Shi | A Study on Artificial Intelligence IQ and Standard Intelligent Model | 16 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Currently, potential threats of artificial intelligence (AI) to human have
triggered a large controversy in society, behind which the core issue
is whether the artificial intelligence (AI) system can be evaluated
quantitatively. This article analyzes and evaluates the challenges that the AI
development level is facing, and proposes that the evaluation methods for the
human intelligence test and the AI system are not uniform; and the key reason
for which is that none of the models can uniformly describe the AI system and
the beings like human. Aiming at this problem, a standard intelligent system
model is established in this study to describe the AI system and the beings
like human uniformly. Based on the model, the article makes an abstract
mathematical description, and builds the standard intelligent machine
mathematical model; expands the Von Neumann architecture and proposes the
Liufeng - Shiyong architecture; gives the definition of the artificial
intelligence IQ, and establishes the artificial intelligence scale and the
evaluation method; conducts the test on 50 search engines and three human
subjects at different ages across the world; and finally obtains the absolute
IQ and deviation IQ rankings for artificial intelligence IQ 2014.
| [
{
"version": "v1",
"created": "Thu, 3 Dec 2015 07:45:32 GMT"
}
] | 1,449,187,200,000 | [
[
"Liu",
"Feng",
""
],
[
"Shi",
"Yong",
""
]
] |
1512.01503 | Quang Minh Ha | Quang Minh Ha, Yves Deville, Quang Dung Pham, Minh Ho\`ang H\`a | On the Min-cost Traveling Salesman Problem with Drone | We proposed arXiv:1509.08764 as the first report about our research
on TSP-D. However due to a critical error in the experiment, we changed the
research approach and method and propose arXiv:1512.01503. Now it seems
arXiv:1509.08764 received new citations. we would like to withdraw
arXiv:1512.01503 and replaced arXiv:1509.08764 with our latest work | null | 10.1016/j.trc.2017.11.015 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Once known to be used exclusively in military domain, unmanned aerial
vehicles (drones) have stepped up to become a part of new logistic method in
commercial sector called "last-mile delivery". In this novel approach, small
unmanned aerial vehicles (UAV), also known as drones, are deployed alongside
with trucks to deliver goods to customers in order to improve the service
quality or reduce the transportation cost. It gives rise to a new variant of
the traveling salesman problem (TSP), of which we call TSP with drone (TSP-D).
In this article, we consider a variant of TSP-D where the main objective is to
minimize the total transportation cost. We also propose two heuristics: "Drone
First, Truck Second" (DFTS) and "Truck First, Drone Second" (TFDS), to
effectively solve the problem. The former constructs the route for the drone
first, while the latter constructs the route for the truck first. We solve a
TSP to generate the route for the truck and propose a mixed integer programming
(MIP) formulation with different profit functions to build the route for the
drone. Numerical results obtained
on many instances with different sizes and characteristics are presented.
Recommendations on promising algorithm choices are also provided.
| [
{
"version": "v1",
"created": "Fri, 4 Dec 2015 18:23:41 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Dec 2015 06:21:51 GMT"
},
{
"version": "v3",
"created": "Sun, 22 May 2016 17:06:40 GMT"
},
{
"version": "v4",
"created": "Thu, 26 May 2016 13:14:33 GMT"
}
] | 1,514,937,600,000 | [
[
"Ha",
"Quang Minh",
""
],
[
"Deville",
"Yves",
""
],
[
"Pham",
"Quang Dung",
""
],
[
"Hà",
"Minh Hoàng",
""
]
] |
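For intuition about the "Truck First, Drone Second" style of construction in the abstract above, here is a toy truck-route step only: a nearest-neighbour tour over hypothetical customer coordinates. The drone-assignment step and the paper's MIP formulation are not reproduced; everything here is an assumption for illustration.

```python
import math

def nearest_neighbour_tour(points, start=0):
    """Greedy tour construction; a crude stand-in for the truck-route step."""
    left = set(range(len(points))) - {start}
    tour, cur = [start], start
    while left:
        cur = min(left, key=lambda j: math.dist(points[cur], points[j]))
        left.remove(cur)
        tour.append(cur)
    return tour

points = [(0, 0), (2, 1), (5, 0), (3, 3), (1, 4)]  # hypothetical customers
print(nearest_neighbour_tour(points))
```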
1512.01915 | Guifei Jiang | Guifei Jiang and Dongmo Zhang and Laurent Perrussel | Knowledge Sharing in Coalitions | This version corrected errors in its previous version published at
AI'15 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to investigate the interplay between knowledge
shared by a group of agents and its coalition ability. We investigate this
relation in the standard context of imperfect information concurrent game. We
assume that whenever a set of agents form a coalition to achieve a goal, they
share their knowledge before acting. Based on this assumption, we propose a new
semantics for alternating-time temporal logic with imperfect information and
perfect recall. It turns out that this semantics is sufficient to preserve all
the desirable properties of coalition ability in traditional coalitional
logics. Meanwhile, we investigate how knowledge sharing within a group of
agents contributes to its coalitional ability through the interplay of
epistemic and coalition modalities. This work provides a partial answer to the
question: which kind of group knowledge is required for a group to achieve
their goals in the context of imperfect information.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 05:27:07 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Nov 2016 09:00:32 GMT"
}
] | 1,480,291,200,000 | [
[
"Jiang",
"Guifei",
""
],
[
"Zhang",
"Dongmo",
""
],
[
"Perrussel",
"Laurent",
""
]
] |
1512.02140 | A. Mani | A Mani | Contamination-Free Measures and Algebraic Operations | Preprint of FUZZIEEE'2013 Conference Paper | IEEE Xplore, 2013 | 10.1109/FUZZ-IEEE.2013.6622521 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An open concept of rough evolution and an axiomatic approach to granules were
also developed recently by the present author. Subsequently, the concepts were
used in the formal framework of rough Y-systems (RYS) for developing on
granular correspondences by her. These have since been used for a new approach
towards comparison of rough algebraic semantics across different semantic
domains by way of correspondences that preserve rough evolution and try to
avoid contamination. In this research paper, new methods are proposed and a
semantics for handling possibly contaminated operations and structured bigness
is developed. These would also be of natural interest for relative consistency
of one collection of knowledge relative to another.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 16:50:05 GMT"
}
] | 1,461,542,400,000 | [
[
"Mani",
"A",
""
]
] |
1512.02266 | Manuele Leonelli | Manuele Leonelli, Christiane G\"orgen and Jim Q. Smith | Sensitivity analysis, multilinearity and beyond | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensitivity methods for the analysis of the outputs of discrete Bayesian
networks have been extensively studied and implemented in different software
packages. These methods usually focus on the study of sensitivity functions and
on the impact of a parameter change to the Chan-Darwiche distance. Although not
fully recognized, the majority of these results heavily rely on the multilinear
structure of atomic probabilities in terms of the conditional probability
parameters associated with this type of network. By defining a statistical
model through the polynomial expression of its associated defining conditional
probabilities, we develop a unifying approach to sensitivity methods applicable
to a large suite of models including extensions of Bayesian networks, for
instance context-specific and dynamic ones, and chain event graphs. By then
focusing on models whose defining polynomial is multilinear, our algebraic
approach enables us to prove that the Chan-Darwiche distance is minimized for a
certain class of multi-parameter contemporaneous variations when parameters are
proportionally covaried.
| [
{
"version": "v1",
"created": "Mon, 7 Dec 2015 22:24:31 GMT"
},
{
"version": "v2",
"created": "Mon, 4 Jul 2016 15:39:27 GMT"
}
] | 1,467,676,800,000 | [
[
"Leonelli",
"Manuele",
""
],
[
"Görgen",
"Christiane",
""
],
[
"Smith",
"Jim Q.",
""
]
] |
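For reference, the Chan-Darwiche distance mentioned in the abstract above is commonly stated as follows (notation assumed here, for distributions P and P' over the atoms w with P(w) > 0):

```latex
D_{\mathrm{CD}}(P, P') \;=\; \ln \max_{w} \frac{P'(w)}{P(w)} \;-\; \ln \min_{w} \frac{P'(w)}{P(w)}
```

The abstract's result can then be read as: within multilinear models, proportionally covarying the remaining parameters minimises this distance for the stated class of multi-parameter variations.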
1512.03020 | Hamidreza Chinaei | Hamidreza Chinaei, Mohsen Rais-Ghasem, Frank Rudzicz | Learning measures of semi-additive behaviour | 7 pages, 11 figures, 5 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In business analytics, measure values, such as sales numbers or volumes of
cargo transported, are often summed along values of one or more corresponding
categories, such as time or shipping container. However, not every measure
should be added by default (e.g., one might more typically want a mean over the
heights of a set of people); similarly, some measures should only be summed
within certain constraints (e.g., population measures need not be summed over
years). In systems such as Watson Analytics, the exact additive behaviour of a
measure is often determined by a human expert. In this work, we propose a small
set of features for this issue. We use these features in a case-based reasoning
approach, where the system suggests an aggregation behaviour, with 86% accuracy
in our collected dataset.
| [
{
"version": "v1",
"created": "Wed, 9 Dec 2015 19:52:55 GMT"
}
] | 1,449,705,600,000 | [
[
"Chinaei",
"Hamidreza",
""
],
[
"Rais-Ghasem",
"Mohsen",
""
],
[
"Rudzicz",
"Frank",
""
]
] |
1512.03516 | Madan Rao Mohan | A.M. Mohan Rao | Subsumptive reflection in SNOMED CT: a large description logic-based
terminology for diagnosis | 8 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Description logic (DL) based biomedical terminology (SNOMED CT) is used
routinely in medical practice. However, diagnostic inference using such
terminology is precluded by its complexity. Here we propose a model that
simplifies these inferential components. We propose three concepts that
classify clinical features and examined their effect on inference using SNOMED
CT. We used PAIRS (Physician Assistant Artificial Intelligence Reference
System) database (1964 findings for 485 disorders, 18 397 disease feature
links) for our analysis. We also use a 50-million medical word corpus for
estimating the vectors of disease-feature links. Our major results are: 10% of
finding-disorder links are concomitant in both assertion and negation, whereas
90% are concomitant in either assertion or negation. Logical implications of
PAIRS data on SNOMED CT include 70% of the links do not share any common system
while 18% share organ and 12% share both system and organ. Applications of
these principles for inference are discussed and suggestions are made for
deriving a diagnostic process using SNOMED CT. Limitations of these processes
and suggestions for improvements are also discussed.
| [
{
"version": "v1",
"created": "Fri, 11 Dec 2015 04:27:50 GMT"
}
] | 1,450,051,200,000 | [
[
"Rao",
"A. M. Mohan",
""
]
] |
1512.04097 | Cristian Molinaro | Marco Calautti, Sergio Greco, Cristian Molinaro, Irina Trubitsyna | Using Linear Constraints for Logic Program Termination Analysis | Under consideration in Theory and Practice of Logic Programming
(TPLP) | Theory and Practice of Logic Programming 16 (2016) 353-377 | 10.1017/S1471068416000077 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is widely acknowledged that function symbols are an important feature in
answer set programming, as they make modeling easier, increase the expressive
power, and allow us to deal with infinite domains. The main issue with their
introduction is that the evaluation of a program might not terminate and
checking whether it terminates or not is undecidable. To cope with this
problem, several classes of logic programs have been proposed where the use of
function symbols is restricted but the program evaluation termination is
guaranteed. Despite the significant body of work in this area, current
approaches do not include many simple practical programs whose evaluation
terminates. In this paper, we present the novel classes of rule-bounded and
cycle-bounded programs, which overcome different limitations of current
approaches by performing a more global analysis of how terms are propagated
from the body to the head of rules. Results on the correctness, the complexity,
and the expressivity of the proposed approach are provided.
| [
{
"version": "v1",
"created": "Sun, 13 Dec 2015 18:36:54 GMT"
},
{
"version": "v2",
"created": "Tue, 15 Dec 2015 13:15:04 GMT"
}
] | 1,582,070,400,000 | [
[
"Calautti",
"Marco",
""
],
[
"Greco",
"Sergio",
""
],
[
"Molinaro",
"Cristian",
""
],
[
"Trubitsyna",
"Irina",
""
]
] |
1512.04358 | Theodore Patkos | Theodore Patkos, Dimitris Plexousakis, Abdelghani Chibani, Yacine
Amirat | An Event Calculus Production Rule System for Reasoning in Dynamic and
Uncertain Domains | Under consideration in Theory and Practice of Logic Programming
(TPLP) | Theory and Practice of Logic Programming 16 (2016) 325-352 | 10.1017/S1471068416000065 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Action languages have emerged as an important field of Knowledge
Representation for reasoning about change and causality in dynamic domains.
This article presents Cerbere, a production system designed to perform online
causal, temporal and epistemic reasoning based on the Event Calculus. The
framework implements the declarative semantics of the underlying logic theories
in a forward-chaining rule-based reasoning system, coupling the high
expressiveness of its formalisms with the efficiency of rule-based systems. To
illustrate its applicability, we present both the modeling of benchmark
problems in the field, as well as its utilization in the challenging domain of
smart spaces. A hybrid framework that combines logic-based with probabilistic
reasoning has been developed, that aims to accommodate activity recognition and
monitoring tasks in smart spaces. Under consideration in Theory and Practice of
Logic Programming (TPLP)
| [
{
"version": "v1",
"created": "Mon, 14 Dec 2015 15:18:58 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Dec 2015 17:57:42 GMT"
}
] | 1,582,070,400,000 | [
[
"Patkos",
"Theodore",
""
],
[
"Plexousakis",
"Dimitris",
""
],
[
"Chibani",
"Abdelghani",
""
],
[
"Amirat",
"Yacine",
""
]
] |
1512.04467 | Jeremie Guiochet | J\'er\'emie Guiochet (LAAS-TSF), Quynh Anh Do Hoang (LAAS-TSF),
Mohamed Kaaniche (LAAS-TSF) | A Model for Safety Case Confidence Assessment | null | 34th International Conference on Computer Safety, Reliability and
Security, Sep 2015, Delft, Netherlands. Springer, Lecture Notes in Computer
Science, Vol. 9337, Programming and Software Engineering, Springer, 2015,
http://safecomp2015.tudelft.nl/ | 10.1007/978-3-319-24255-2_23 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building a safety case is a common approach to make expert judgement explicit
about the safety of a system. The issue of confidence in such argumentation is
still an open research field. Providing quantitative estimation of confidence
is an interesting approach to manage complexity of arguments. This paper
explores the main current approaches, and proposes a new model for quantitative
confidence estimation based on Belief Theory for its definition, and on
Bayesian Belief Networks for its propagation in safety case networks.
| [
{
"version": "v1",
"created": "Fri, 20 Nov 2015 15:24:22 GMT"
}
] | 1,450,137,600,000 | [
[
"Guiochet",
"Jérémie",
"",
"LAAS-TSF"
],
[
"Hoang",
"Quynh Anh Do",
"",
"LAAS-TSF"
],
[
"Kaaniche",
"Mohamed",
"",
"LAAS-TSF"
]
] |
1512.04652 | Mitra Montazeri | Mitra Montazeri, Mahdieh Soleymani Baghshah, Ahmad Enhesari | Hyper-Heuristic Algorithm for Finding Efficient Features in Diagnose of
Lung Cancer Disease | Published in the Journal of Basic and Applied Scientific Research,
2013 | J. Basic Appl. Sci. Res, 2013. 3(10): p. 134-140 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Lung cancer is known as a primary cancer, and its survival
rate is about 15%. Early detection of lung cancer is the leading factor in
the survival rate. Not all symptoms (features) of lung cancer appear until the
cancer spreads to other areas. Accurate early detection of lung cancer is
therefore needed to increase the survival rate. For accurate detection,
efficient features must be characterized and redundant features deleted from
among all features. Feature selection is the problem of selecting informative
features among all features. Materials and Methods: The lung cancer database
consists of 32 patient records with 57 features. This database was collected by
Hong and Young and is indexed in the University of California Irvine
repository. The experimental contents include data extracted from clinical
data and X-ray data, etc. The data describe 3 types of pathological lung
cancers, and all features take an integer value 0-3. In our study, a new method
is proposed to identify efficient features of lung cancer. It is based on a
Hyper-Heuristic. Results: We obtained an accuracy of 80.63% using a reduced
11-feature set. The proposed method is compared to 5 machine learning feature
selection methods, whose accuracies are 60.94, 57.81, 68.75, 60.94 and 68.75.
Conclusions: The proposed method has better performance, with the highest level
of accuracy. Therefore, the proposed model is recommended for identifying
efficient symptoms of the disease. These findings are very important in health
research, particularly in the allocation of medical resources for patients who
are predicted to be at high risk.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 05:15:07 GMT"
},
{
"version": "v2",
"created": "Sun, 24 Jan 2016 11:07:25 GMT"
}
] | 1,453,766,400,000 | [
[
"Montazeri",
"Mitra",
""
],
[
"Baghshah",
"Mahdieh Soleymani",
""
],
[
"Enhesari",
"Ahmad",
""
]
] |
1512.04976 | Adam Krasuski | Adam Krasuski | Conditions for Normative Decision Making at the Fire Ground | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss the changes in an attitude to decision making at the fire ground.
The changes are driven by the recent technological shift. The emerging new
approaches in sensing and data processing (under common umbrella of
Cyber-Physical Systems) allow for leveling off the gap, between humans and
machines, in perception of the fire ground. Furthermore, results from
descriptive decision theory question the rationality of human choices. This
creates the need for searching and testing new approaches for decision making
during emergency. We propose the framework that addresses this need. The
primary feature of the framework is the possibility of incorporating
normative and prescriptive approaches to decision making. The framework also
allows for comparison of the performance of decisions between human and
machine.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 21:37:06 GMT"
}
] | 1,450,310,400,000 | [
[
"Krasuski",
"Adam",
""
]
] |
1512.05006 | Vikash Mansinghka | Vikash Mansinghka, Richard Tibbetts, Jay Baxter, Pat Shafto, Baxter
Eaves | BayesDB: A probabilistic programming system for querying the probable
implications of data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Is it possible to make statistical inference broadly accessible to
non-statisticians without sacrificing mathematical rigor or inference quality?
This paper describes BayesDB, a probabilistic programming platform that aims to
enable users to query the probable implications of their data as directly as
SQL databases enable them to query the data itself. This paper focuses on four
aspects of BayesDB: (i) BQL, an SQL-like query language for Bayesian data
analysis, that answers queries by averaging over an implicit space of
probabilistic models; (ii) techniques for implementing BQL using a broad class
of multivariate probabilistic models; (iii) a semi-parametric Bayesian
model-builder that automatically builds ensembles of factorial mixture models to
serve as baselines; and (iv) MML, a "meta-modeling" language for imposing
qualitative constraints on the model-builder and combining baseline models with
custom algorithmic and statistical models that can be implemented in external
software. BayesDB is illustrated using three applications: cleaning and
exploring a public database of Earth satellites; assessing the evidence for
temporal dependence between macroeconomic indicators; and analyzing a salary
survey.
| [
{
"version": "v1",
"created": "Tue, 15 Dec 2015 23:09:41 GMT"
}
] | 1,450,310,400,000 | [
[
"Mansinghka",
"Vikash",
""
],
[
"Tibbetts",
"Richard",
""
],
[
"Baxter",
"Jay",
""
],
[
"Shafto",
"Pat",
""
],
[
"Eaves",
"Baxter",
""
]
] |
1512.05247 | Steven Schockaert | Sofie De Clercq, Steven Schockaert, Martine De Cock, Ann Now\'e | Solving stable matching problems using answer set programming | Under consideration in Theory and Practice of Logic Programming
(TPLP). arXiv admin note: substantial text overlap with arXiv:1302.7251 | Theory and Practice of Logic Programming 16 (2016) 247-268 | 10.1017/S147106841600003X | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the introduction of the stable marriage problem (SMP) by Gale and
Shapley (1962), several variants and extensions have been investigated. While
this variety is useful to widen the application potential, each variant
requires a new algorithm for finding the stable matchings. To address this
issue, we propose an encoding of the SMP using answer set programming (ASP),
which can straightforwardly be adapted and extended to suit the needs of
specific applications. The use of ASP also means that we can take advantage of
highly efficient off-the-shelf solvers. To illustrate the flexibility of our
approach, we show how our ASP encoding naturally allows us to select optimal
stable matchings, i.e. matchings that are optimal according to some
user-specified criterion. To the best of our knowledge, our encoding offers the
first exact implementation to find sex-equal, minimum regret, egalitarian or
maximum cardinality stable matchings for SMP instances in which individuals may
designate unacceptable partners and ties between preferences are allowed.
This paper is under consideration in Theory and Practice of Logic Programming
(TPLP).
| [
{
"version": "v1",
"created": "Wed, 16 Dec 2015 16:59:14 GMT"
}
] | 1,582,070,400,000 | [
[
"De Clercq",
"Sofie",
""
],
[
"Schockaert",
"Steven",
""
],
[
"De Cock",
"Martine",
""
],
[
"Nowé",
"Ann",
""
]
] |
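The stability condition that the ASP encoding in the abstract above enforces can be stated in a few lines of plain Python: a matching is stable iff it admits no blocking pair. A hedged sketch (the data layout and names are assumptions; the paper's actual contribution is the declarative ASP encoding, not this check):

```python
def blocking_pairs(matching, men_pref, women_pref):
    """Pairs (m, w) who both prefer each other to their current partners."""
    partner_of_w = {w: m for m, w in matching.items()}
    pairs = []
    for m, w_cur in matching.items():
        for w in men_pref[m]:
            if w == w_cur:
                break                       # m prefers his current partner
            m_cur = partner_of_w[w]
            if women_pref[w].index(m) < women_pref[w].index(m_cur):
                pairs.append((m, w))        # both would rather be together
    return pairs

men_pref = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women_pref = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(blocking_pairs({"m1": "w1", "m2": "w2"}, men_pref, women_pref))
# [('m2', 'w1')] -> unstable; swapping partners would make it stable
```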
1512.05484 | Mohsen Malmir | Mohsen Malmir, Karan Sikka, Deborah Forster, Ian Fasel, Javier R.
Movellan, Garrison W. Cottrell | Deep Active Object Recognition by Joint Label and Action Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An active object recognition system has the advantage of being able to act in
the environment to capture images that are more suited for training and that
lead to better performance at test time. In this paper, we propose a deep
convolutional neural network for active object recognition that simultaneously
predicts the object label, and selects the next action to perform on the object
with the aim of improving recognition performance. We treat active object
recognition as a reinforcement learning problem and derive the cost function to
train the network for joint prediction of the object label and the action. A
generative model of object similarities based on the Dirichlet distribution is
proposed and embedded in the network for encoding the state of the system. The
training is carried out by simultaneously minimizing the label and action
prediction errors using gradient descent. We empirically show that the proposed
network is able to predict both the object label and the actions on GERMS, a
dataset for active object recognition. We compare the test label prediction
accuracy of the proposed model with Dirichlet and Naive Bayes state encoding.
The results of experiments suggest that the proposed model equipped with
Dirichlet state encoding is superior in performance, and selects images that
lead to better training and higher accuracy of label prediction at test time.
| [
{
"version": "v1",
"created": "Thu, 17 Dec 2015 07:33:45 GMT"
}
] | 1,450,396,800,000 | [
[
"Malmir",
"Mohsen",
""
],
[
"Sikka",
"Karan",
""
],
[
"Forster",
"Deborah",
""
],
[
"Fasel",
"Ian",
""
],
[
"Movellan",
"Javier R.",
""
],
[
"Cottrell",
"Garrison W.",
""
]
] |
1512.05569 | Mohit Verma | Mohit Verma and J. Rajasankar | A thermodynamical approach towards multi-criteria decision making (MCDM) | null | Applied Soft Computing 2017, Volume 52, Pages 323--332 | 10.1016/j.asoc.2016.10.033 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In multi-criteria decision making (MCDM) problems, ratings are assigned to
the alternatives on different criteria by the expert group. In this paper, we
propose a thermodynamically consistent model for MCDM using the analogies for
thermodynamical indicators - energy, exergy and entropy. The most commonly used
method for analysing MCDM problem is Technique for Order of Preference by
Similarity to Ideal Solution (TOPSIS). The conventional TOPSIS method uses a
measure similar to that of energy for the ranking of alternatives. We
demonstrate that the ranking of the alternatives is more meaningful if we use
exergy in place of energy. The use of exergy is superior due to the inclusion
of a factor accounting for the quality of the ratings by the expert group. The
unevenness in the ratings by the experts is measured by entropy. The procedure
for the calculation of the thermodynamical indicators is explained in both
crisp and fuzzy environment. Finally, two case studies are carried out to
demonstrate the effectiveness of the proposed model.
| [
{
"version": "v1",
"created": "Thu, 17 Dec 2015 13:02:36 GMT"
}
] | 1,490,659,200,000 | [
[
"Verma",
"Mohit",
""
],
[
"Rajasankar",
"J.",
""
]
] |
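For readers unfamiliar with conventional TOPSIS as referenced in the abstract above, a minimal numpy sketch, assuming benefit criteria only and made-up ratings and weights; the paper's contribution is to replace the energy-like closeness measure with exergy- and entropy-based indicators, which is not done here:

```python
import numpy as np

def topsis(ratings, weights):
    """Conventional TOPSIS: rank alternatives by relative closeness."""
    norm = ratings / np.linalg.norm(ratings, axis=0)   # vector normalisation
    v = norm * weights
    ideal, anti = v.max(axis=0), v.min(axis=0)         # benefit criteria only
    d_best = np.linalg.norm(v - ideal, axis=1)
    d_worst = np.linalg.norm(v - anti, axis=1)
    return d_worst / (d_best + d_worst)

ratings = np.array([[7.0, 9.0, 8.0],    # alternatives x criteria
                    [8.0, 7.0, 6.0],
                    [9.0, 6.0, 7.0]])
weights = np.array([0.5, 0.3, 0.2])
print(topsis(ratings, weights))         # higher closeness = better rank
```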
1512.05832 | Owain Evans | Owain Evans, Andreas Stuhlmueller, Noah D. Goodman | Learning the Preferences of Ignorant, Inconsistent Agents | AAAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important use of machine learning is to learn what people value. What
posts or photos should a user be shown? Which jobs or activities would a person
find rewarding? In each case, observations of people's past choices can inform
our inferences about their likes and preferences. If we assume that choices are
approximately optimal according to some utility function, we can treat
preference inference as Bayesian inverse planning. That is, given a prior on
utility functions and some observed choices, we invert an optimal
decision-making process to infer a posterior distribution on utility functions.
However, people often deviate from approximate optimality. They have false
beliefs, their planning is sub-optimal, and their choices may be temporally
inconsistent due to hyperbolic discounting and other biases. We demonstrate how
to incorporate these deviations into algorithms for preference inference by
constructing generative models of planning for agents who are subject to false
beliefs and time inconsistency. We explore the inferences these models make
about preferences, beliefs, and biases. We present a behavioral experiment in
which human subjects perform preference inference given the same observations
of choices as our model. Results show that human subjects (like our model)
explain choices in terms of systematic deviations from optimal behavior and
suggest that they take such deviations into account when inferring preferences.
| [
{
"version": "v1",
"created": "Fri, 18 Dec 2015 00:24:08 GMT"
}
] | 1,450,656,000,000 | [
[
"Evans",
"Owain",
""
],
[
"Stuhlmueller",
"Andreas",
""
],
[
"Goodman",
"Noah D.",
""
]
] |
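The Bayesian inverse-planning step described in the abstract above can be illustrated with a one-decision toy: infer the utility of option A from noisy (softmax) choices against a known baseline. The grid, the noise parameter, and the data are assumptions; the paper additionally models false beliefs and hyperbolic discounting, which this sketch omits.

```python
import numpy as np

# Softmax (Luce) choice model: P(choose A | u) = 1 / (1 + exp(-u / tau)),
# with option B fixed at utility 0. Invert it with a grid posterior.
tau = 1.0
u_grid = np.linspace(-3, 3, 121)            # candidate utilities for A
prior = np.ones_like(u_grid) / len(u_grid)  # flat prior

choices = [1, 1, 0, 1]                      # observed: 1 = chose A, 0 = B
p_a = 1.0 / (1.0 + np.exp(-u_grid / tau))
likelihood = np.prod([p_a if c else 1 - p_a for c in choices], axis=0)

posterior = prior * likelihood
posterior /= posterior.sum()
print("posterior mean utility of A:", (u_grid * posterior).sum())
```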
1512.05849 | Miles Brundage | Miles Brundage | Modeling Progress in AI | AAAI 2016 Workshop on AI, Ethics, and Society | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Participants in recent discussions of AI-related issues ranging from
intelligence explosion to technological unemployment have made diverse claims
about the nature, pace, and drivers of progress in AI. However, these theories
are rarely specified in enough detail to enable systematic evaluation of their
assumptions or to extrapolate progress quantitatively, as is often done with
some success in other technological domains. After reviewing relevant
literatures and justifying the need for more rigorous modeling of AI progress,
this paper contributes to that research program by suggesting ways to account
for the relationship between hardware speed increases and algorithmic
improvements in AI, the role of human inputs in enabling AI capabilities, and
the relationships between different sub-fields of AI. It then outlines ways of
tailoring AI progress models to generate insights on the specific issue of
technological unemployment, and outlines future directions for research on AI
progress.
| [
{
"version": "v1",
"created": "Fri, 18 Dec 2015 04:17:39 GMT"
}
] | 1,450,656,000,000 | [
[
"Brundage",
"Miles",
""
]
] |
1512.06211 | Agnieszka Lawrynowicz | C. Maria Keet and Agnieszka Lawrynowicz | Test-Driven Development of ontologies (extended version) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emerging ontology authoring methods to add knowledge to an ontology focus on
ameliorating the validation bottleneck. The verification of the newly added
axiom still amounts to trying and seeing what the reasoner says, because a
systematic testbed for ontology authoring is missing. We sought to address this
by introducing the approach of test-driven development for ontology authoring.
We specify 36 generic tests, as TBox queries and TBox axioms tested through
individuals, and structure their inner workings in an `open box'-way, which
cover the OWL 2 DL language features. This is implemented as a Protege plugin
so that one can perform a TDD test as a black box test. We evaluated the two
test approaches on their performance. The TBox queries were faster, and that
effect is more pronounced the larger the ontology is. We provide a general
sequence of a TDD process for ontology engineering as a foundation for a TDD
methodology.
| [
{
"version": "v1",
"created": "Sat, 19 Dec 2015 09:15:24 GMT"
}
] | 1,450,742,400,000 | [
[
"Keet",
"C. Maria",
""
],
[
"Lawrynowicz",
"Agnieszka",
""
]
] |
1512.06747 | Skyler Seto | Skyler Seto, Wenyu Zhang, Yichen Zhou | Multivariate Time Series Classification Using Dynamic Time Warping
Template Selection for Human Activity Recognition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and computationally efficient means for classifying human activities
have been the subject of extensive research efforts. Most current research
focuses on extracting complex features to achieve high classification accuracy.
We propose a template selection approach based on Dynamic Time Warping, such
that complex feature extraction and domain knowledge is avoided. We demonstrate
the predictive capability of the algorithm on both simulated and real
smartphone data.
| [
{
"version": "v1",
"created": "Mon, 21 Dec 2015 18:36:53 GMT"
}
] | 1,450,742,400,000 | [
[
"Seto",
"Skyler",
""
],
[
"Zhang",
"Wenyu",
""
],
[
"Zhou",
"Yichen",
""
]
] |
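A self-contained sketch of the two ingredients named in the abstract above: the classic dynamic-programming DTW distance, and nearest-template classification on top of it. Template data and labels below are made up, and template selection (the paper's focus) is reduced here to simply listing templates per class.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic-programming DTW between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def classify(query, templates):
    """1-nearest-template label; templates: label -> list of sequences."""
    return min((dtw_distance(query, t), lab)
               for lab, ts in templates.items() for t in ts)[1]

templates = {"walk": [[0, 1, 0, 1, 0]], "still": [[0, 0, 0, 0, 0]]}
print(classify([0, 1, 1, 0], templates))    # -> "walk"
```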
1512.07048 | Roel Bertens | Roel Bertens and Jilles Vreeken and Arno Siebes | Beauty and Brains: Detecting Anomalous Pattern Co-Occurrences | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our world is filled with both beautiful and brainy people, but how often does
a Nobel Prize winner also win a beauty pageant? Let us assume that someone who
is both very beautiful and very smart is more rare than what we would expect
from the combination of the number of beautiful and brainy people. Of course
there will still always be some individuals that defy this stereotype; these
beautiful brainy people are exactly the class of anomaly we focus on in this
paper. They do not posses intrinsically rare qualities, it is the unexpected
combination of factors that makes them stand out.
In this paper we define the above described class of anomaly and propose a
method to quickly identify them in transaction data. Further, as we take a
pattern set based approach, our method readily explains why a transaction is
anomalous. The effectiveness of our method is thoroughly verified with a wide
range of experiments on both real world and synthetic data.
| [
{
"version": "v1",
"created": "Tue, 22 Dec 2015 12:15:12 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Feb 2016 15:55:56 GMT"
}
] | 1,455,148,800,000 | [
[
"Bertens",
"Roel",
""
],
[
"Vreeken",
"Jilles",
""
],
[
"Siebes",
"Arno",
""
]
] |
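The "beautiful and brainy" intuition in the abstract above reduces to comparing observed co-occurrence against what independence of the two patterns would predict. A minimal sketch of such a surprise score in bits (the score definition is an illustrative assumption, not the paper's pattern-set-based measure):

```python
import math

def cooccurrence_surprise(transactions, x, y):
    """Bits of surprise: positive when {x, y} co-occurs less often than
    the individual supports predict under independence."""
    n = len(transactions)
    sx = sum(1 for t in transactions if x in t) / n
    sy = sum(1 for t in transactions if y in t) / n
    sxy = sum(1 for t in transactions if x in t and y in t) / n
    if sxy == 0:
        return float("inf")
    return math.log2(sx * sy / sxy)

people = [{"brainy"}, {"beautiful"}, {"brainy"}, {"beautiful"},
          {"beautiful", "brainy"}, {"brainy"}, {"beautiful"}, set()]
print(cooccurrence_surprise(people, "beautiful", "brainy"))  # 1.0 bit
```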
1512.07056 | Roel Bertens | Roel Bertens and Jilles Vreeken and Arno Siebes | Keeping it Short and Simple: Summarising Complex Event Sequences with
Multivariate Patterns | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study how to obtain concise descriptions of discrete multivariate
sequential data. In particular, how to do so in terms of rich multivariate
sequential patterns that can capture potentially highly interesting
(cor)relations between sequences. To this end we allow our pattern language to
span over the domains (alphabets) of all sequences, allow patterns to overlap
temporally, as well as allow for gaps in their occurrences.
We formalise our goal by the Minimum Description Length principle, by which
our objective is to discover the set of patterns that provides the most
succinct description of the data. To discover high-quality pattern sets
directly from data, we introduce DITTO, a highly efficient algorithm that
approximates the ideal result very well.
Experiments show that DITTO correctly discovers the patterns planted in
synthetic data. Moreover, it scales favourably with the length of the data, the
number of attributes, and the alphabet sizes. On real data, ranging from sensor
networks to annotated text, DITTO discovers easily interpretable summaries that
provide clear insight in both the univariate and multivariate structure.
| [
{
"version": "v1",
"created": "Tue, 22 Dec 2015 12:35:32 GMT"
},
{
"version": "v2",
"created": "Wed, 10 Feb 2016 16:19:05 GMT"
}
] | 1,455,148,800,000 | [
[
"Bertens",
"Roel",
""
],
[
"Vreeken",
"Jilles",
""
],
[
"Siebes",
"Arno",
""
]
] |
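The Minimum Description Length objective underlying DITTO in the abstract above is, in its usual two-part form (notation assumed; the paper's concrete encoding of pattern sets and data is more involved):

```latex
M^{*} \;=\; \operatorname*{arg\,min}_{M \in \mathcal{M}} \; \bigl( L(M) + L(D \mid M) \bigr)
```

where L(M) is the length, in bits, of the pattern-set model and L(D | M) the length of the data encoded with it; the best summary is the one that compresses the data most.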
1512.07162 | Xi'ao Ma | Xi'ao Ma, Guoyin Wang, Hong Yu | Heuristic algorithms for finding distribution reducts in probabilistic
rough set model | 44 pages, 24 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Attribute reduction is one of the most important topics in rough set theory.
Heuristic attribute reduction algorithms have been presented to solve the
attribute reduction problem. It is generally known that fitness functions play
a key role in developing heuristic attribute reduction algorithms. The
monotonicity of fitness functions can guarantee the validity of heuristic
attribute reduction algorithms. In probabilistic rough set model, distribution
reducts can ensure the decision rules derived from the reducts are compatible
with those derived from the original decision table. However, there are few
studies on developing heuristic attribute reduction algorithms for finding
distribution reducts. This is partly due to the fact that there are no
monotonic fitness functions that are used to design heuristic attribute
reduction algorithms in probabilistic rough set model. The main objective of
this paper is to develop heuristic attribute reduction algorithms for finding
distribution reducts in probabilistic rough set model. For one thing, two
monotonic fitness functions are constructed, from which equivalence definitions
of distribution reducts can be obtained. For another, two modified monotonic
fitness functions are proposed to evaluate the significance of attributes more
effectively. On this basis, two heuristic attribute reduction algorithms for
finding distribution reducts are developed based on addition-deletion method
and deletion method. In particular, the monotonicity of fitness functions
guarantees the rationality of the proposed heuristic attribute reduction
algorithms. Results of experimental analysis are included to quantify the
effectiveness of the proposed fitness functions and distribution reducts.
| [
{
"version": "v1",
"created": "Tue, 22 Dec 2015 17:17:45 GMT"
}
] | 1,450,828,800,000 | [
[
"Ma",
"Xi'ao",
""
],
[
"Wang",
"Guoyin",
""
],
[
"Yu",
"Hong",
""
]
] |
1512.07721 | Sam Fletcher | Sam Fletcher, Md Zahidul Islam | Measuring pattern retention in anonymized data -- where one measure is
not enough | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we explore how modifying data to preserve privacy affects the
quality of the patterns discoverable in the data. For any analysis of modified
data to be worth doing, the data must be as close to the original as possible.
Therein lies a problem -- how does one make sure that modified data still
contains the information it had before modification? This question is not the
same as asking if an accurate classifier can be built from the modified data.
Often in the literature, the prediction accuracy of a classifier made from
modified (anonymized) data is used as evidence that the data is similar to the
original. We demonstrate that this is not the case, and we propose a new
methodology for measuring the retention of the patterns that existed in the
original data. We then use our methodology to design three measures that can be
easily implemented, each measuring aspects of the data that no pre-existing
techniques can measure. These measures do not negate the usefulness of
prediction accuracy or other measures -- they are complementary to them, and
support our argument that one measure is almost never enough.
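As a toy illustration of the idea (not the paper's three measures, which are not spelled out in this abstract), one can compare the patterns mined before and after anonymization directly, rather than relying on classifier accuracy alone:

```python
# Hypothetical helper: overlap between pattern sets mined from the
# original and the anonymized data. High classifier accuracy on the
# anonymized data does not imply that this overlap is high.
def pattern_retention(original_patterns, anonymized_patterns):
    a, b = set(original_patterns), set(anonymized_patterns)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)   # Jaccard similarity of the two sets
```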
| [
{
"version": "v1",
"created": "Thu, 24 Dec 2015 05:36:02 GMT"
}
] | 1,451,001,600,000 | [
[
"Fletcher",
"Sam",
""
],
[
"Islam",
"Md Zahidul",
""
]
] |
1512.07931 | Hugh Chen | Hugh Chen, Yusuf Erol, Eric Shen, Stuart Russell | Probabilistic Model-Based Approach for Heart Beat Detection | null | null | 10.1088/0967-3334/37/9/1404 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays, hospitals are ubiquitous and integral to modern society. Patients
flow in and out of a veritable whirlwind of paperwork, consultations, and
potential inpatient admissions, through an abstracted system that is not
without flaws. One of the biggest flaws in the medical system is perhaps an
unexpected one: the patient alarm system. One longitudinal study reported an
88.8% rate of false alarms, with other studies reporting numbers of similar
magnitudes. These false alarm rates lead to a number of deleterious effects
that manifest in a significantly lower standard of care across clinics.
This paper discusses a model-based probabilistic inference approach to
identifying variables at the detection level. We design a generative model
that reflects a high-level view of human physiology and perform approximate Bayesian
inference. One primary goal of this paper is to justify a Bayesian modeling
approach to increasing robustness in a physiological domain.
We use three data sets provided by Physionet, a research resource for complex
physiological signals, in the form of the Physionet 2014 Challenge set-p1 and
set-p2, as well as the MGH/MF Waveform Database. On the extended data set our
algorithm is on par with the other top six submissions to the Physionet 2014
challenge.
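A minimal sketch of the flavour of such inference (our simplification, not the paper's physiological model): infer a latent heart rate from two noisy channels by likelihood weighting, and alarm only when the posterior supports an abnormal rate:

```python
import random
from math import exp, pi, sqrt

def gauss_pdf(x, mu, sigma):
    return exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * sqrt(2 * pi))

# Posterior probability that the latent rate is abnormally low, given
# noisy ECG and blood-pressure readings (all parameters illustrative).
def posterior_abnormal(ecg_obs, bp_obs, n=10000, threshold=40.0):
    w_abnormal = w_total = 0.0
    for _ in range(n):
        rate = random.gauss(70, 20)                  # prior over the rate
        w = gauss_pdf(ecg_obs, rate, 5) * gauss_pdf(bp_obs, rate, 10)
        w_total += w
        if rate < threshold:
            w_abnormal += w
    return w_abnormal / w_total if w_total else 0.0
```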
| [
{
"version": "v1",
"created": "Thu, 24 Dec 2015 23:24:24 GMT"
}
] | 1,474,416,000,000 | [
[
"Chen",
"Hugh",
""
],
[
"Erol",
"Yusuf",
""
],
[
"Shen",
"Eric",
""
],
[
"Russell",
"Stuart",
""
]
] |
1512.07943 | Alexander Kott | Alexander Kott, Michael Ownby | Toward a Research Agenda in Adversarial Reasoning: Computational
Approaches to Anticipating the Opponent's Intent and Actions | A version of this paper was presented at the SPIE Symposium on
Enabling Technologies for Simulation Science | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper defines adversarial reasoning as computational approaches to
inferring and anticipating an enemy's perceptions, intents and actions. It
argues that adversarial reasoning transcends the boundaries of game theory and
must also leverage such disciplines as cognitive modeling, control theory, AI
planning and others. To illustrate the challenges of applying adversarial
reasoning to real-world problems, the paper explores the lessons learned with
CADET, a battle planning system that focuses on brigade-level ground
operations and involves adversarial reasoning. From this example of current
capabilities, the paper proceeds to describe RAID, a DARPA program that aims
to build capabilities in adversarial reasoning, and how such capabilities would
address practical requirements in Defense and other application areas.
| [
{
"version": "v1",
"created": "Fri, 25 Dec 2015 01:27:55 GMT"
}
] | 1,451,347,200,000 | [
[
"Kott",
"Alexander",
""
],
[
"Ownby",
"Michael",
""
]
] |
1512.08525 | Khalifeh AlJadda | Khalifeh AlJadda, Mohammed Korayem, Camilo Ortiz, Trey Grainger, John
A. Miller, Khaled Rasheed, Krys J. Kochut, William S. York, Rene Ranzinger,
Melody Porterfield | Mining Massive Hierarchical Data Using a Scalable Probabilistic
Graphical Model | To be submitted to Big Data Journal. arXiv admin note: substantial
text overlap with arXiv:1407.5656 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic Graphical Models (PGM) are very useful in the fields of machine
learning and data mining. The crucial limitation of those models, however, is
scalability. The Bayesian Network, which is one of the most common PGMs
used in machine learning and data mining, demonstrates this limitation when the
training data consists of random variables, each of which has a large set of
possible values. In the big data era, one would expect new extensions to the
existing PGMs to handle the massive amount of data produced these days by
computers, sensors and other electronic devices. With hierarchical data - data
that is arranged in a treelike structure with several levels - one would expect
to see hundreds of thousands or millions of values distributed over even just a
small number of levels. When modeling this kind of hierarchical data across
large data sets, Bayesian Networks become infeasible for representing the
probability distributions. In this paper we introduce an extension to Bayesian
Networks to handle massive sets of hierarchical data in a reasonable amount of
time and space. The proposed model achieves perfect precision of 1.0 and high
recall of 0.93 when it is used as a multi-label classifier for the annotation of
mass spectrometry data. On another data set of 1.5 billion search logs provided
by CareerBuilder.com the model was able to predict latent semantic
relationships between search keywords with accuracy up to 0.80.
| [
{
"version": "v1",
"created": "Mon, 28 Dec 2015 21:02:20 GMT"
}
] | 1,451,520,000,000 | [
[
"AlJadda",
"Khalifeh",
""
],
[
"Korayem",
"Mohammed",
""
],
[
"Ortiz",
"Camilo",
""
],
[
"Grainger",
"Trey",
""
],
[
"Miller",
"John A.",
""
],
[
"Rasheed",
"Khaled",
""
],
[
"Kochut",
"Krys J.",
""
],
[
"York",
"William S.",
""
],
[
"Ranzinger",
"Rene",
""
],
[
"Porterfield",
"Melody",
""
]
] |
1512.08553 | Wolfgang Garn | Wolfgang Garn and Panos Louvieris | Conditional probability generation methods for high reliability
effects-based decision making | 18 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision making is often based on Bayesian networks. The building blocks for
Bayesian networks are its conditional probability tables (CPTs). These tables
are obtained by parameter estimation methods, or they are elicited from subject
matter experts (SMEs). Some of these knowledge representations are insufficient
approximations. Using knowledge fusion of cause and effect observations leads
to better predictive decisions. We propose three new methods to generate CPTs,
which even work when only soft evidence is provided. The first two are novel
ways of mapping conditional expectations to the probability space. The third is
a column extraction method, which obtains CPTs from nonlinear functions such as
the multinomial logistic regression. Case studies on military effects and burnt
forest desertification have demonstrated that CPTs derived in this way have
highly reliable predictive power, including superiority over the CPTs obtained from
SMEs. In this context, new quality measures for determining the goodness of a
CPT and for comparing CPTs with each other have been introduced. The predictive
power and enhanced reliability of decision making based on the novel CPT
generation methods presented in this paper have been confirmed and validated
within the context of the case studies.
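The column-extraction idea can be sketched on hedged assumptions (names and shapes are ours): a fitted multinomial logistic regression maps each parent configuration to a probability vector over the child's states, and these vectors become the CPT columns:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# weights: (n_child_states, n_features); bias: (n_child_states,)
# parent_configs: one feature vector per parent configuration.
def cpt_from_logistic(weights, bias, parent_configs):
    cols = [softmax(weights @ np.asarray(x, float) + bias)
            for x in parent_configs]
    return np.column_stack(cols)  # one CPT column per parent configuration
```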
| [
{
"version": "v1",
"created": "Mon, 28 Dec 2015 23:08:30 GMT"
}
] | 1,451,520,000,000 | [
[
"Garn",
"Wolfgang",
""
],
[
"Louvieris",
"Panos",
""
]
] |
1512.08811 | Piotr Szwed PhD | Piotr Szwed | Combining Fuzzy Cognitive Maps and Discrete Random Variables | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose an extension to the Fuzzy Cognitive Maps (FCMs) that
aims at aggregating a number of reasoning tasks into one parallel run. The
described approach consists in replacing real-valued activation levels of
concepts (and further influence weights) by random variables. Such an extension,
together with the implemented software tool, allows for determining ranges
reached by concept activation levels, sensitivity analysis as well as
statistical analysis of multiple reasoning results. We replace multiplication
and addition operators appearing in the FCM state equation by appropriate
convolutions applicable for discrete random variables. To make the model
computationally feasible, it is further augmented with aggregation operations
for discrete random variables. We discuss four implemented aggregators and
report results of preliminary tests.
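A sketch of the core mechanics under stated assumptions (PMFs as {value: probability} dicts; not the paper's exact operators):

```python
from itertools import product

# Sum of two independent discrete random variables via convolution.
def convolve_sum(pmf_a, pmf_b):
    out = {}
    for (x, px), (y, py) in product(pmf_a.items(), pmf_b.items()):
        out[x + y] = out.get(x + y, 0.0) + px * py
    return out

def scale(pmf, w):                    # multiply a PMF's support by w
    out = {}
    for x, p in pmf.items():
        out[w * x] = out.get(w * x, 0.0) + p
    return out

# One FCM step: activations is a list of PMFs, weights a float matrix,
# squash a function applied to each support value (e.g. a sigmoid).
def fcm_step(activations, weights, squash):
    new = []
    for i in range(len(activations)):
        acc = {0.0: 1.0}
        for j, a_j in enumerate(activations):
            if weights[j][i]:
                acc = convolve_sum(acc, scale(a_j, weights[j][i]))
        out = {}
        for x, p in acc.items():      # apply squash, merging collisions
            out[squash(x)] = out.get(squash(x), 0.0) + p
        new.append(out)
    return new
```

Note that the support of `acc` grows multiplicatively with each convolution, which is precisely why the model is augmented with aggregation operations to stay computationally feasible.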
| [
{
"version": "v1",
"created": "Tue, 29 Dec 2015 22:41:28 GMT"
}
] | 1,451,520,000,000 | [
[
"Szwed",
"Piotr",
""
]
] |
1512.08899 | Peter Sch\"uller | Peter Sch\"uller | Modeling Variations of First-Order Horn Abduction in Answer Set
Programming | Technical Report | Fundamenta Informaticae, vol. 149, no. 1-2, pp. 159-207, 2016 | 10.3233/FI-2016-1446 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We study abduction in First Order Horn logic theories where all atoms can be
abduced and we are looking for preferred solutions with respect to three
objective functions: cardinality minimality, coherence, and weighted abduction.
We represent this reasoning problem in Answer Set Programming (ASP), in order
to obtain a flexible framework for experimenting with global constraints and
objective functions, and to test the boundaries of what is possible with ASP.
Realizing this problem in ASP is challenging as it requires value invention and
equivalence between certain constants, because the Unique Names Assumption does
not hold in general. To permit reasoning in cyclic theories, we formally
describe fine-grained variations of limiting Skolemization. We identify term
equivalence as a main instantiation bottleneck, and improve the efficiency of
our approach with on-demand constraints that were used to eliminate the same
bottleneck in state-of-the-art solvers. We evaluate our approach experimentally
on the ACCEL benchmark for plan recognition in Natural Language Understanding.
Our encodings are publicly available, modular, and our approach is more
efficient than state-of-the-art solvers on the ACCEL benchmark.
| [
{
"version": "v1",
"created": "Wed, 30 Dec 2015 10:22:14 GMT"
},
{
"version": "v2",
"created": "Mon, 20 Jun 2016 13:26:15 GMT"
},
{
"version": "v3",
"created": "Sun, 30 Oct 2016 14:00:03 GMT"
},
{
"version": "v4",
"created": "Wed, 31 Jan 2018 18:39:07 GMT"
}
] | 1,517,443,200,000 | [
[
"Schüller",
"Peter",
""
]
] |
1512.08969 | Josef Moudrik | Josef Moud\v{r}\'ik, Petr Baudi\v{s}, Roman Neruda | Evaluating Go Game Records for Prediction of Player Attributes | null | Computational Intelligence and Games (CIG), 2015 IEEE Conference
on, vol., no., pp. 162-168, Aug. 31 2015-Sept. 2 2015 | 10.1109/CIG.2015.7317909 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a way of extracting and aggregating per-move evaluations from sets
of Go game records. The evaluations capture different aspects of the games such
as played patterns or statistics of sente/gote sequences. Using machine learning
algorithms, the evaluations can be utilized to predict different relevant
target variables. We apply this methodology to predict the strength and playing
style of the player (e.g. territoriality or aggressivity) with good accuracy.
We propose a number of possible applications including aiding in Go study,
seeding real-world ranks of internet players or tuning of Go-playing programs.
| [
{
"version": "v1",
"created": "Wed, 30 Dec 2015 15:09:51 GMT"
}
] | 1,451,520,000,000 | [
[
"Moudřík",
"Josef",
""
],
[
"Baudiš",
"Petr",
""
],
[
"Neruda",
"Roman",
""
]
] |
1512.09075 | Philip Thomas | Philip S. Thomas and Billy Okal | A Notation for Markov Decision Processes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper specifies a notation for Markov decision processes.
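For orientation, one common shape of such a notation (a sketch, not necessarily the paper's exact symbol choices) is:

```latex
% An MDP as a tuple of state/action spaces, dynamics, rewards,
% initial distribution, and discount (symbol choices illustrative).
\[
  (\mathcal{S}, \mathcal{A}, P, R, d_0, \gamma)
\]
\begin{align*}
  P(s, a, s') &= \Pr(S_{t+1} = s' \mid S_t = s,\, A_t = a), \\
  R(s, a)     &= \mathbb{E}\left[R_t \mid S_t = s,\, A_t = a\right], \\
  d_0(s)      &= \Pr(S_0 = s), \qquad \gamma \in [0, 1].
\end{align*}
```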
| [
{
"version": "v1",
"created": "Wed, 30 Dec 2015 19:34:01 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Sep 2016 14:30:43 GMT"
}
] | 1,473,379,200,000 | [
[
"Thomas",
"Philip S.",
""
],
[
"Okal",
"Billy",
""
]
] |
1512.09254 | Josef Moudrik | Josef Moud\v{r}\'ik, Roman Neruda | Evolving Non-linear Stacking Ensembles for Prediction of Go Player
Attributes | Published in 2015 IEEE Symposium Series on Computational Intelligence | null | 10.1109/SSCI.2015.235 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper presents an application of non-linear stacking ensembles for
prediction of Go player attributes. An evolutionary algorithm is used to form a
diverse ensemble of base learners, which are then aggregated by a stacking
ensemble. This methodology allows for an efficient prediction of different
attributes of Go players from sets of their games. These attributes can be
fairly general; in this work, we used the strength and style of the players.
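A minimal non-evolutionary sketch of the stacking part (scikit-learn names; the evolutionary selection of base learners described above is omitted):

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Diverse base learners whose out-of-fold predictions feed a
# non-linear meta-learner (here, a small neural network).
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    final_estimator=MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000),
    cv=5,  # out-of-fold predictions avoid leaking training labels
)
# Usage (X_train, y_train, X_test are placeholder arrays):
# stack.fit(X_train, y_train); y_pred = stack.predict(X_test)
```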
| [
{
"version": "v1",
"created": "Thu, 31 Dec 2015 10:37:04 GMT"
}
] | 1,506,297,600,000 | [
[
"Moudřík",
"Josef",
""
],
[
"Neruda",
"Roman",
""
]
] |
1601.00367 | Pascal Van Hentenryck | Arthur Maheo, Philip Kilby, Pascal Van Hentenryck | Benders Decomposition for the Design of a Hub and Shuttle Public Transit
System | null | null | 10.1287/trsc.2017.0756 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The BusPlus project aims at improving the off-peak hours public transit
service in Canberra, Australia. To address the difficulty of covering a large
geographic area, BusPlus proposes a hub and shuttle model consisting of a
combination of a few high-frequency bus routes between key hubs and a large
number of shuttles that bring passengers from their origin to the closest hub
and take them from their last bus stop to their destination. This paper focuses
on the design of the bus network and proposes an efficient solution method for this
multimodal network design problem based on the Benders decomposition method.
Starting from a MIP formulation of the problem, the paper presents a Benders
decomposition approach using dedicated solution techniques for solving
independent sub-problems, Pareto optimal cuts, cut bundling, and core point
update. Computational results on real-world data from Canberra's public transit
system justify the design choices and show that the approach outperforms the
MIP formulation by two orders of magnitude. Moreover, the results show that the
hub and shuttle model may decrease transit time by a factor of 2, while staying
within the costs of the existing transit system.
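Structurally, the Benders loop alternates between a relaxed master problem and a subproblem that prices the master's design; a hedged sketch with placeholder solver objects (not the paper's implementation):

```python
# `master` and `subproblem` are hypothetical solver wrappers; the loop
# shows only the decomposition logic, not the dedicated cut techniques.
def benders(master, subproblem, tol=1e-6, max_iters=1000):
    design = None
    for _ in range(max_iters):
        design, theta = master.solve()             # design + optimistic cost
        true_cost, cut = subproblem.solve(design)  # duals yield a Benders cut
        if true_cost - theta <= tol:               # estimate is tight: optimal
            break
        master.add_cut(cut)                        # tighten master, iterate
    return design
```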
| [
{
"version": "v1",
"created": "Wed, 30 Dec 2015 23:26:47 GMT"
}
] | 1,562,025,600,000 | [
[
"Maheo",
"Arthur",
""
],
[
"Kilby",
"Philip",
""
],
[
"Van Hentenryck",
"Pascal",
""
]
] |
1601.00529 | Fariba Sadri Dr. | Robert Kowalski and Fariba Sadri | Programming in logic without logic programming | Under consideration in Theory and Practice of Logic Programming
(TPLP) | Theory and Practice of Logic Programming 16 (2016) 269-295 | 10.1017/S1471068416000041 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In previous work, we proposed a logic-based framework in which computation is
the execution of actions in an attempt to make reactive rules of the form if
antecedent then consequent true in a canonical model of a logic program
determined by an initial state, sequence of events, and the resulting sequence
of subsequent states. In this model-theoretic semantics, reactive rules are the
driving force, and logic programs play only a supporting role.
In the canonical model, states, actions and other events are represented with
timestamps. But in the operational semantics, for the sake of efficiency,
timestamps are omitted and only the current state is maintained. State
transitions are performed reactively by executing actions to make the
consequents of rules true whenever the antecedents become true. This
operational semantics is sound, but incomplete. It cannot make reactive rules
true by preventing their antecedents from becoming true, or by proactively
making their consequents true before their antecedents become true.
In this paper, we characterize the notion of reactive model, and prove that
the operational semantics can generate all and only such models. In order to
focus on the main issues, we omit the logic programming component of the
framework.
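A minimal sketch of this operational reading (our own names; states and events as sets of facts):

```python
# Keep only the current state; on each event, fire every rule whose
# antecedent holds by executing actions that make its consequent true.
def run(state, events, rules):
    for event in events:
        state = state | event                        # assimilate the event
        for antecedent, make_consequent_true in rules:
            if antecedent(state):
                state = make_consequent_true(state)  # reactively satisfy rule
    return state
```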
| [
{
"version": "v1",
"created": "Mon, 4 Jan 2016 15:09:38 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Jan 2016 15:06:29 GMT"
}
] | 1,582,070,400,000 | [
[
"Kowalski",
"Robert",
""
],
[
"Sadri",
"Fariba",
""
]
] |
1601.00669 | Antonio Lieto | Agnese Augello, Ignazio Infantino, Antonio Lieto, Giovanni Pilato,
Riccardo Rizzo, Filippo Vella | Artwork creation by a cognitive architecture integrating computational
creativity and dual process approaches | 30 pages, 8 figures, to appear in Biologically Inspired Cognitive
Architectures 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper proposes a novel cognitive architecture (CA) for computational
creativity based on the Psi model and on the mechanisms inspired by dual
process theories of reasoning and rationality. In recent years, many cognitive
models have focused on dual process theories to better describe and implement
complex cognitive skills in artificial agents, but creativity has been
approached only at a descriptive level. In previous works we have described
various modules of the cognitive architecture that allow a robot to produce
creative paintings. By means of dual process theories we refine some relevant
mechanisms to obtain artworks, and in particular we explain details about the
resolution level of the CA dealing with different strategies of access to the
Long Term Memory (LTM) and managing the interaction between S1 and S2 processes
of the dual process theory. The creative process involves both divergent and
convergent processes in either implicit or explicit manner. This leads to four
activities (exploratory, reflective, tacit, and analytic) that, triggered by
urges and motivations, generate creative acts. These creative acts exploit both
the LTM and the WM in order to make novel substitutions to a perceived image by
properly mixing parts of pictures coming from different domains. The paper
highlights the role of the interaction between S1 and S2 processes, modulated
by the resolution level, which focuses the attention of the creative agent by
broadening or narrowing the exploration of novel solutions, or even drawing the
solution from a set of already made associations. An example of an artificial
painter is described in several experiments using a robotic platform.
| [
{
"version": "v1",
"created": "Mon, 4 Jan 2016 21:24:48 GMT"
}
] | 1,452,038,400,000 | [
[
"Augello",
"Agnese",
""
],
[
"Infantino",
"Ignazio",
""
],
[
"Lieto",
"Antonio",
""
],
[
"Pilato",
"Giovanni",
""
],
[
"Rizzo",
"Riccardo",
""
],
[
"Vella",
"Filippo",
""
]
] |
1601.01635 | Dmytro Terletskyi | D. A. Terletskyi, A. I. Provotar | Fuzzy Object-Oriented Dynamic Networks. I | null | Cybernetics and Systems Analysis, 2015, Volume 51, Issue 1, pp
34-40 | 10.1007/s10559-015-9694-0 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concepts of fuzzy objects and their classes are described, making it
possible to structurally represent knowledge about fuzzy and partially defined
objects and their classes. Operations over such objects and classes are also
proposed that make it possible to obtain sets and new classes of fuzzy objects
and also to model variations in object structures under the influence of
external factors.
| [
{
"version": "v1",
"created": "Thu, 7 Jan 2016 18:39:55 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 19:22:47 GMT"
}
] | 1,455,667,200,000 | [
[
"Terletskyi",
"D. A.",
""
],
[
"Provotar",
"A. I.",
""
]
] |
1601.02745 | Xiaodong He | Paul Smolensky, Moontae Lee, Xiaodong He, Wen-tau Yih, Jianfeng Gao,
Li Deng | Basic Reasoning with Tensor Product Representations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the initial development of a general theory for
mapping inference in predicate logic to computation over Tensor Product
Representations (TPRs; Smolensky (1990), Smolensky & Legendre (2006)). After an
initial brief synopsis of TPRs (Section 0), we begin with particular examples
of inference with TPRs in the 'bAbI' question-answering task of Weston et al.
(2015) (Section 1). We then present a simplification of the general analysis
that suffices for the bAbI task (Section 2). Finally, we lay out the general
treatment of inference over TPRs (Section 3). We also show that the simplification
in Section 2 derives the inference methods described in Lee et al. (2016); this
shows how the simple methods of Lee et al. (2016) can be formally extended to
more general reasoning tasks.
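The basic binding machinery these analyses build on can be sketched in a few lines (the standard TPR operations of Smolensky (1990); the bAbI-specific inference is not reproduced here):

```python
import numpy as np

def bind(filler, role):
    return np.outer(filler, role)  # filler/role binding as outer product

def unbind(tpr, role_dual):
    return tpr @ role_dual         # exact when roles have duals with
                                   # r_i . u_j = delta_ij

roles = np.eye(2)                  # orthonormal roles are their own duals
agent = np.array([1.0, 0.0, 1.0])  # toy filler vectors
patient = np.array([0.0, 1.0, 1.0])
structure = bind(agent, roles[0]) + bind(patient, roles[1])
assert np.allclose(unbind(structure, roles[0]), agent)
```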
| [
{
"version": "v1",
"created": "Tue, 12 Jan 2016 06:44:54 GMT"
}
] | 1,452,643,200,000 | [
[
"Smolensky",
"Paul",
""
],
[
"Lee",
"Moontae",
""
],
[
"He",
"Xiaodong",
""
],
[
"Yih",
"Wen-tau",
""
],
[
"Gao",
"Jianfeng",
""
],
[
"Deng",
"Li",
""
]
] |
1601.02865 | Peter Nightingale | Peter Nightingale and Andrea Rendl | Essence' Description | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A description of the Essence' language as used by the tool Savile Row.
| [
{
"version": "v1",
"created": "Tue, 12 Jan 2016 14:05:35 GMT"
}
] | 1,452,643,200,000 | [
[
"Nightingale",
"Peter",
""
],
[
"Rendl",
"Andrea",
""
]
] |
1601.03065 | Igor Subbotin | Igor Ya. Subbotin, Michael Gr. Voskoglou | An Application of the Generalized Rectangular Fuzzy Model to Critical
Thinking Assessment | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The authors apply the Generalized Rectangular Model to assessing critical
thinking skills and its relations with their language competency.
| [
{
"version": "v1",
"created": "Fri, 8 Jan 2016 18:18:36 GMT"
}
] | 1,452,729,600,000 | [
[
"Subbotin",
"Igor Ya.",
""
],
[
"Voskoglou",
"Michael Gr.",
""
]
] |
1601.03785 | Regivan Santiago | A. Diego S. Farias, Valdigleis S. Costa, Luiz Ranyer A. Lopes,
Benjam\'in Bedregal and Regivan Santiago | A Method for Image Reduction Based on a Generalization of Ordered
Weighted Averaging Functions | 32 pages, 19 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we propose a special type of aggregation function which
generalizes the notion of Ordered Weighted Averaging Function - OWA. The
resulting functions are called Dynamic Ordered Weighted Averaging Functions ---
DYOWAs. This generalization will be developed in such a way that the weight
vectors are variables depending on the input vector. In particular, these
operators generalize aggregation functions such as Minimum, Maximum, Arithmetic
Mean, and Median, which are extensively used in image processing. In this
field of research two problems are considered: The determination of methods to
reduce images and the construction of techniques which provide noise reduction.
The operators described here can be used in both cases. In terms of
image reduction we apply the methodology provided by Patermain et al. We use
the noise reduction operators obtained here to treat the images obtained in the
first part of the paper, thus obtaining images with better quality.
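A simplified reading of the construction (our sketch; the paper's definitions are more general): an OWA attaches weights to ranked positions, and the dynamic variant computes the weights from the input itself:

```python
import numpy as np

def owa(x, w):
    return np.sort(x)[::-1] @ w              # weights apply to sorted inputs

def dyowa(x, weight_fn):
    return np.sort(x)[::-1] @ weight_fn(x)   # input-dependent weights

x = np.array([0.2, 0.9, 0.5])
print(owa(x, np.array([1.0, 0.0, 0.0])))     # Maximum
print(owa(x, np.full(3, 1 / 3)))             # Arithmetic Mean
print(owa(x, np.array([0.0, 1.0, 0.0])))     # Median (n = 3)
```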
| [
{
"version": "v1",
"created": "Fri, 15 Jan 2016 00:13:33 GMT"
}
] | 1,453,075,200,000 | [
[
"Farias",
"A. Diego S.",
""
],
[
"Costa",
"Valdigleis S.",
""
],
[
"Lopes",
"Luiz Ranyer A.",
""
],
[
"Bedregal",
"Benjamín",
""
],
[
"Santiago",
"Regivan",
""
]
] |
1601.04105 | Mohsen Taheriyan | Mohsen Taheriyan, Craig A. Knoblock, Pedro Szekely, Jose Luis Ambite | Learning the Semantics of Structured Data Sources | Web Semantics: Science, Services and Agents on the World Wide Web,
2016 | null | 10.1016/j.websem.2015.12.003 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information sources such as relational databases, spreadsheets, XML, JSON,
and Web APIs contain a tremendous amount of structured data that can be
leveraged to build and augment knowledge graphs. However, they rarely provide a
semantic model to describe their contents. Semantic models of data sources
represent the implicit meaning of the data by specifying the concepts and the
relationships within the data. Such models are the key ingredients to
automatically publish the data into knowledge graphs. Manually modeling the
semantics of data sources requires significant effort and expertise, and
although desirable, building these models automatically is a challenging
problem. Most of the related work focuses on semantic annotation of the data
fields (source attributes). However, constructing a semantic model that
explicitly describes the relationships between the attributes in addition to
their semantic types is critical.
We present a novel approach that exploits the knowledge from a domain
ontology and the semantic models of previously modeled sources to automatically
learn a rich semantic model for a new source. This model represents the
semantics of the new source in terms of the concepts and relationships defined
by the domain ontology. Given some sample data from the new source, we leverage
the knowledge in the domain ontology and the known semantic models to construct
a weighted graph that represents the space of plausible semantic models for the
new source. Then, we compute the top k candidate semantic models and suggest to
the user a ranked list of the semantic models for the new source. The approach
takes into account user corrections to learn more accurate semantic models on
future data sources. Our evaluation shows that our method generates expressive
semantic models for data sources and services with minimal user input. ...
| [
{
"version": "v1",
"created": "Sat, 16 Jan 2016 00:55:25 GMT"
}
] | 1,453,161,600,000 | [
[
"Taheriyan",
"Mohsen",
""
],
[
"Knoblock",
"Craig A.",
""
],
[
"Szekely",
"Pedro",
""
],
[
"Ambite",
"Jose Luis",
""
]
] |
1601.06069 | Alexander Kott | Larry Ground, Alexander Kott, Ray Budd | Coalition-based Planning of Military Operations: Adversarial Reasoning
Algorithms in an Integrated Decision Aid | A version of this paper appeared in proceedings of the 2002
International Conference on Knowledge Systems for Coalition Operations (KSCO) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Use of knowledge-based planning tools can help alleviate the challenges of
planning a complex operation by a coalition of diverse parties in an
adversarial environment. We explore these challenges and potential
contributions of knowledge-based tools using as an example the CADET system, a
knowledge-based tool capable of producing automatically (or with human
guidance) battle plans with realistic degree of detail and complexity. In
ongoing experiments, it compared favorably with human planners. Interleaved
planning, scheduling, routing, attrition and consumption processes comprise the
computational approach of this tool. From the coalition operations perspective,
such tools offer an important aid in rapid synchronization of assets and
actions of heterogeneous assets belonging to multiple organizations,
potentially with distinct doctrine and rules of engagement. In this paper, we
discuss the functionality of the tool, provide a brief overview of the
technical approach and experimental results, and outline the potential value of
such tools.
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 16:53:45 GMT"
}
] | 1,453,680,000,000 | [
[
"Ground",
"Larry",
""
],
[
"Kott",
"Alexander",
""
],
[
"Budd",
"Ray",
""
]
] |
1601.06108 | Alexander Kott | Alexander Kott, Ray Budd, Larry Ground, Lakshmi Rebbapragada, John
Langston | Decision Aids for Adversarial Planning in Military Operations:
Algorithms, Tools, and Turing-test-like Experimental Validation | A version of this paper appeared in the Applied Intelligence journal | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Use of intelligent decision aids can help alleviate the challenges of
planning complex operations. We describe integrated algorithms, and a tool
capable of translating a high-level concept for a tactical military operation
into a fully detailed, actionable plan, producing automatically (or with human
guidance) plans with realistic degree of detail and of human-like quality.
Tight interleaving of several algorithms -- planning, adversary estimates,
scheduling, routing, attrition and consumption estimates -- comprises the
computational approach of this tool. Although originally developed for Army
large-unit operations, the technology is generic and also applies to a number
of other domains, particularly in critical situations requiring detailed
planning within a constrained period of time. In this paper, we focus
particularly on the engineering tradeoffs in the design of the tool. In an
experimental evaluation, reminiscent of the Turing test, the tool's performance
compared favorably with human planners.
| [
{
"version": "v1",
"created": "Fri, 22 Jan 2016 19:13:06 GMT"
}
] | 1,453,680,000,000 | [
[
"Kott",
"Alexander",
""
],
[
"Budd",
"Ray",
""
],
[
"Ground",
"Larry",
""
],
[
"Rebbapragada",
"Lakshmi",
""
],
[
"Langston",
"John",
""
]
] |
1601.06569 | Kareem Amin | Kareem Amin, Satinder Singh | Towards Resolving Unidentifiability in Inverse Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a setting for Inverse Reinforcement Learning (IRL) where the
learner is extended with the ability to actively select multiple environments,
observing an agent's behavior on each environment. We first demonstrate that if
the learner can experiment with any transition dynamics on some fixed set of
states and actions, then there exists an algorithm that reconstructs the
agent's reward function to the fullest extent theoretically possible, and that
requires only a small (logarithmic) number of experiments. We contrast this
result to what is known about IRL in single fixed environments, namely that the
true reward function is fundamentally unidentifiable. We then extend this
setting to the more realistic case where the learner may not select arbitrary
transition dynamics, but rather is restricted to some fixed set of environments
that it may try. We connect the problem of maximizing the information derived
from experiments to submodular function maximization and demonstrate that a
greedy algorithm is near optimal (up to logarithmic factors). Finally, we
empirically validate our algorithm on an environment inspired by behavioral
psychology.
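The near-optimal greedy step can be sketched generically (hedged: `info_gain` is a placeholder for the monotone submodular objective; the paper's construction is specific to IRL):

```python
# Greedy maximization of a monotone submodular set function: pick, k
# times, the environment with the largest marginal information gain,
# which is near optimal up to the usual (1 - 1/e) factor.
def greedy_select(environments, k, info_gain):
    chosen = []
    for _ in range(k):
        best = max((e for e in environments if e not in chosen),
                   key=lambda e: info_gain(chosen + [e]) - info_gain(chosen))
        chosen.append(best)
    return chosen
```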
| [
{
"version": "v1",
"created": "Mon, 25 Jan 2016 11:50:43 GMT"
}
] | 1,453,766,400,000 | [
[
"Amin",
"Kareem",
""
],
[
"Singh",
"Satinder",
""
]
] |
1601.06923 | Nevin L. Zhang | Chen Fu, Nevin L. Zhang, Bao Xin Chen, Zhou Rong Chen, Xiang Lan Jin,
Rong Juan Guo, Zhi Gang Chen, Yun Ling Zhang | Identification and classification of TCM syndrome types among patients
with vascular mild cognitive impairment using latent tree analysis | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: To treat patients with vascular mild cognitive impairment (VMCI)
using TCM, it is necessary to classify the patients into TCM syndrome types and
to apply different treatments to different types. We investigate how to
properly carry out the classification using a novel data-driven method known as
latent tree analysis.
Method: A cross-sectional survey on VMCI was carried out in several regions
in northern China from 2008 to 2011, which resulted in a data set that involves
803 patients and 93 symptoms. Latent tree analysis was performed on the data to
reveal symptom co-occurrence patterns, and the patients were partitioned into
clusters in multiple ways based on the patterns. The patient clusters were
matched up with syndrome types, and population statistics of the clusters are
used to quantify the syndrome types and to establish classification rules.
Results: Eight syndrome types are identified: Qi Deficiency, Qi Stagnation,
Blood Deficiency, Blood Stasis, Phlegm-Dampness, Fire-Heat, Yang Deficiency,
and Yin Deficiency. The prevalence and symptom occurrence characteristics of
each syndrome type are determined. Quantitative classification rules are
established for determining whether a patient belongs to each of the syndrome
types.
Conclusions: A solution for the TCM syndrome classification problem
associated with VMCI is established based on the latent tree analysis of
unlabeled symptom survey data. The results can be used as a reference in
clinical practice to improve the quality of syndrome differentiation and to reduce
diagnosis variances across physicians. They can also be used for patient
selection in research projects aimed at finding biomarkers for the syndrome
types and in randomized control trials aimed at determining the efficacy of TCM
treatments of VMCI.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2016 08:34:56 GMT"
},
{
"version": "v2",
"created": "Wed, 24 Feb 2016 16:04:24 GMT"
}
] | 1,456,358,400,000 | [
[
"Fu",
"Chen",
""
],
[
"Zhang",
"Nevin L.",
""
],
[
"Chen",
"Bao Xin",
""
],
[
"Chen",
"Zhou Rong",
""
],
[
"Jin",
"Xiang Lan",
""
],
[
"Guo",
"Rong Juan",
""
],
[
"Chen",
"Zhi Gang",
""
],
[
"Zhang",
"Yun Ling",
""
]
] |
1601.07065 | Ong Sing Goh | Ser Ling Lim, Ong Sing Goh | Intelligent Conversational Bot for Massive Online Open Courses (MOOCs) | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Massive Online Open Courses (MOOCs), which were introduced in 2008, have since
drawn attention around the world both for their advantages and for criticism of
their drawbacks. One of the issues in MOOCs, the lack of interactivity with the
instructor, has brought conversational bots into the picture to fill in this
gap. In this study, a prototype MOOC conversational bot, MOOC-bot, is developed
and integrated into the MOOCs website to respond to learner inquiries using
text or speech input. MOOC-bot uses the popular Artificial Intelligence Markup
Language (AIML) to develop its knowledge base, leverages AIML's capability to
deliver appropriate responses, and can be quickly adapted to new knowledge
domains. The system architecture of MOOC-bot consists
of a knowledge base along with an AIML interpreter, a chat interface, the MOOCs
website, and the Web Speech API to provide speech recognition and speech
synthesis capability. The initial MOOC-bot prototype has general knowledge from
the
past Loebner Prize winner ALICE, frequently asked questions, and content
offered by Universiti Teknikal Malaysia Melaka (UTeM). The evaluation of
MOOC-bot based on the past competition questions from Chatterbox Challenge
(CBC) and Loebner Prize has shown that it was able to provide correct answers
most of the time during the test and demonstrated the capability to prolong the
conversation. The advantages of MOOC-bot, such as providing 24-hour service
across different time zones, holding knowledge in multiple domains, and being
shareable by multiple sites simultaneously, outweigh its existing limitations.
| [
{
"version": "v1",
"created": "Tue, 26 Jan 2016 15:23:29 GMT"
}
] | 1,453,852,800,000 | [
[
"Lim",
"Ser Ling",
""
],
[
"Goh",
"Ong Sing",
""
]
] |
1601.07409 | Chi Mai Nguyen | Chi Mai Nguyen, Roberto Sebastiani, Paolo Giorgini, and John
Mylopoulos | Multi-Object Reasoning with Constrained Goal Models | 52 pages (with appendices). Under journal submission | null | 10.1007/s00766-016-0263-5 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Goal models have been widely used in Computer Science to represent software
requirements, business objectives, and design qualities. Existing goal
modelling techniques, however, have shown limitations of expressiveness and/or
tractability in coping with complex real-world problems. In this work, we
exploit advances in automated reasoning technologies, notably Satisfiability
and Optimization Modulo Theories (SMT/OMT), and we propose and formalize: (i)
an extended modelling language for goals, namely the Constrained Goal Model
(CGM), which makes explicit the notion of goal refinement and of domain
assumption, allows for expressing preferences between goals and refinements,
and allows for associating numerical attributes to goals and refinements for
defining constraints and optimization goals over multiple objective functions,
refinements and their numerical attributes; (ii) a novel set of automated
reasoning functionalities over CGMs, allowing for automatically generating
suitable refinements of input CGMs, under user-specified assumptions and
constraints, that also maximize preferences and optimize given objective
functions. We have implemented these modelling and reasoning functionalities in
a tool, named CGM-Tool, using the OMT solver OptiMathSAT as automated reasoning
backend. Moreover, we have conducted an experimental evaluation on large CGMs
to support the claim that our proposal scales well for goal models with
thousands of elements.
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 15:36:30 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Nov 2016 18:03:54 GMT"
}
] | 1,481,241,600,000 | [
[
"Nguyen",
"Chi Mai",
""
],
[
"Sebastiani",
"Roberto",
""
],
[
"Giorgini",
"Paolo",
""
],
[
"Mylopoulos",
"John",
""
]
] |
1601.07483 | Shashank Shekhar | Shashank Shekhar and Deepak Khemani | Learning and Tuning Meta-heuristics in Plan Space Planning | AAAI format, (9 pages), (1 figure), (4 tables) | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, the planning community has observed that techniques for
learning heuristic functions have yielded improvements in performance. One
approach is to use offline learning to learn predictive models from existing
heuristics in a domain dependent manner. These learned models are deployed as
new heuristic functions. The learned models can in turn be tuned online using a
domain independent error correction approach to further enhance their
informativeness. The online tuning approach is domain independent but instance
specific, and contributes to improved performance for individual instances as
planning proceeds. Consequently it is more effective in larger problems.
In this paper, we present two approaches applicable in Partial Order Causal
Link (POCL) Planning, also known as Plan Space Planning. First, we
enhance the performance of a POCL planner by giving an algorithm
for supervised learning. Second, we discuss an online error minimization
approach in the POCL framework to minimize the step-error associated with the
offline learned models, thus enhancing their informativeness. Our evaluation
shows that the learning approaches scale up the performance of the planner over
standard benchmarks, especially for larger problems.
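One generic single-step correction scheme, sketched under our own names (the paper adapts such ideas to the POCL setting; this is a simplified version):

```python
# Track the mean one-step error of a learned heuristic h online, and
# debias h by that error times a distance-to-go estimate d.
class CorrectedHeuristic:
    def __init__(self, h, d):
        self.h, self.d = h, d
        self.err_sum, self.steps = 0.0, 0

    def observe(self, parent, best_child, cost):
        # If h were perfect along the best child, this term would be 0.
        self.err_sum += self.h(best_child) + cost - self.h(parent)
        self.steps += 1

    def __call__(self, node):
        eps = self.err_sum / self.steps if self.steps else 0.0
        return self.h(node) + eps * self.d(node)
```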
| [
{
"version": "v1",
"created": "Wed, 27 Jan 2016 18:23:24 GMT"
},
{
"version": "v2",
"created": "Thu, 28 Jan 2016 09:26:10 GMT"
},
{
"version": "v3",
"created": "Sun, 24 Apr 2016 15:03:37 GMT"
}
] | 1,461,628,800,000 | [
[
"Shekhar",
"Shashank",
""
],
[
"Khemani",
"Deepak",
""
]
] |
1601.07929 | Martin Plajner | Martin Plajner and Ji\v{r}\'i Vomlel | Probabilistic Models for Computerized Adaptive Testing: Experiments | 9 pages, v2: language corrections | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper follows previous research we have already performed in the area of
Bayesian network models for CAT. We present models using Item Response Theory
(IRT - standard CAT method), Bayesian networks, and neural networks. We
conducted simulated CAT tests on empirical data. Results of these tests are
presented for each model separately and compared.
| [
{
"version": "v1",
"created": "Thu, 28 Jan 2016 22:03:32 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Feb 2016 06:36:09 GMT"
}
] | 1,454,371,200,000 | [
[
"Plajner",
"Martin",
""
],
[
"Vomlel",
"Jiří",
""
]
] |