id (stringlengths 9-10) | submitter (stringlengths 5-47, nullable) | authors (stringlengths 5-1.72k) | title (stringlengths 11-234) | comments (stringlengths 1-491, nullable) | journal-ref (stringlengths 4-396, nullable) | doi (stringlengths 13-97, nullable) | report-no (stringlengths 4-138, nullable) | categories (stringclasses, 1 value) | license (stringclasses, 9 values) | abstract (stringlengths 29-3.66k) | versions (listlengths 1-21) | update_date (int64, 1,180B-1,718B) | authors_parsed (sequencelengths 1-98)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1602.00198 | Chuyu Xiong | Chuyu Xiong | Discussion on Mechanical Learning and Learning Machine | 11 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mechanical learning is a computing system that is based on a set of simple
and fixed rules, and can learn from incoming data. A learning machine is a
system that realizes mechanical learning. Importantly, we emphasize that it is
based on a set of simple and fixed rules, in contrast to what is commonly
called machine learning, which is sophisticated software based on very
complicated mathematical theory and often needs human intervention for software
fine-tuning and manual adjustments. Here, we discuss some basic facts and
principles of such a system, and try to lay down a framework for further study.
We propose two directions for approaching mechanical learning, analogous to the
Church-Turing pair: one is to try to realize a learning machine, and the other
is to try to describe mechanical learning well.
| [
{
"version": "v1",
"created": "Sun, 31 Jan 2016 04:05:50 GMT"
}
] | 1,454,371,200,000 | [
[
"Xiong",
"Chuyu",
""
]
] |
1602.00269 | Sunil Mandhan | Sarath P R, Sunil Mandhan, Yoshiki Niwa | Numerical Atrribute Extraction from Clinical Texts | 6 Pages | null | 10.13140/RG.2.1.4763.3365 | Submission 42, CLEF 2015 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes an information extraction system, which is an
extension of the system developed by team Hitachi for the "Disease/Disorder
Template filling" task organized by the ShARe/CLEF eHealth Evaluation Lab 2014.
In this extension module we focus on the extraction of numerical attributes and
values from discharge summary records and on associating the correct relations
between attributes and values. We solve the problem in two steps. The first
step is the extraction of numerical attributes and values, which is developed
as a Named Entity Recognition (NER) model using the Stanford NLP libraries. The
second step is correctly associating the attributes with values, which is
developed as a relation extraction module in the Apache cTAKES framework. We
integrated the Stanford NER model as a cTAKES pipeline component and used it in
the relation extraction module. A Conditional Random Field (CRF) algorithm is
used for NER and Support Vector Machines (SVMs) for relation extraction. For
attribute-value relation extraction, we observe 95% accuracy using NER alone
and a combined accuracy of 87% with NER and SVM.
| [
{
"version": "v1",
"created": "Sun, 31 Jan 2016 15:58:51 GMT"
}
] | 1,454,371,200,000 | [
[
"R",
"Sarath P",
""
],
[
"Mandhan",
"Sunil",
""
],
[
"Niwa",
"Yoshiki",
""
]
] |
1602.01059 | Nicolas Maudet | Elise Bonzon (LIPADE), J\'er\^ome Delobelle (CRIL), S\'ebastien
Konieczny (CRIL), Nicolas Maudet (LIP6) | A Comparative Study of Ranking-based Semantics for Abstract
Argumentation | Proceedings of the 30th AAAI Conference on Artificial Intelligence
(AAAI-2016), Feb 2016, Phoenix, United States | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Argumentation is a process of evaluating and comparing a set of arguments. A
way to compare them consists in using a ranking-based semantics which
rank-orders arguments from the most to the least acceptable ones. Recently, a
number of such semantics have been proposed independently, often associated
with some desirable properties. However, there is no comparative study which
takes a broader perspective. This is what we propose in this work. We provide a
general comparison of all these semantics with respect to the proposed
properties. This allows us to underline the differences in behavior between the
existing semantics.
| [
{
"version": "v1",
"created": "Tue, 2 Feb 2016 19:49:03 GMT"
}
] | 1,454,457,600,000 | [
[
"Bonzon",
"Elise",
"",
"LIPADE"
],
[
"Delobelle",
"Jérôme",
"",
"CRIL"
],
[
"Konieczny",
"Sébastien",
"",
"CRIL"
],
[
"Maudet",
"Nicolas",
"",
"LIP6"
]
] |
1602.01398 | Usman Habib Usman Habib | Usman Habib, Gerhard Zucker | Finding the different patterns in buildings data using bag of words
representation with clustering | null | null | 10.1109/FIT.2015.60 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The understanding of building operation has become a challenging task
due to the large amount of data recorded in energy efficient buildings. Still,
today experts use visual tools for analyzing the data. In order to make this
task realistic, a method is proposed in this paper to automatically detect the
different patterns in buildings. K-Means clustering is used to automatically
identify the ON (operational) cycles of the chiller. In the next step the ON
cycles are transformed into a symbolic representation by using the Symbolic
Aggregate Approximation (SAX) method. Then the SAX symbols are converted to a
bag-of-words representation for hierarchical clustering. Moreover, the proposed
technique is applied to real-life data of an adsorption chiller. Additionally,
the results from the proposed method and the dynamic time warping (DTW)
approach are also discussed and compared.
| [
{
"version": "v1",
"created": "Wed, 3 Feb 2016 18:11:32 GMT"
}
] | 1,457,049,600,000 | [
[
"Habib",
"Usman",
""
],
[
"Zucker",
"Gerhard",
""
]
] |
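
The processing pipeline summarized in the abstract of 1602.01398 above (K-Means to isolate ON cycles, SAX symbolization, a bag-of-words representation, then hierarchical clustering) can be illustrated with a minimal, hypothetical Python sketch. The toy signal, segment count, alphabet, and clustering settings below are illustrative assumptions, not the paper's actual data or parameters, and the bag of SAX characters stands in for a full sliding-window bag-of-words.

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster

def sax_symbols(series, n_segments=8, alphabet="abcd"):
    """Z-normalize, apply PAA, then map segment means to SAX symbols."""
    x = (series - series.mean()) / (series.std() + 1e-9)
    paa = x[: n_segments * (len(x) // n_segments)].reshape(n_segments, -1).mean(axis=1)
    breakpoints = np.array([-0.67, 0.0, 0.67])   # 4-symbol breakpoints under N(0,1)
    return "".join(alphabet[np.searchsorted(breakpoints, v)] for v in paa)

# Toy chiller power signal: low-power noise interleaved with two "ON" plateaus.
rng = np.random.default_rng(0)
power = np.concatenate([rng.normal(0.1, 0.05, 200),
                        rng.normal(5.0, 0.3, 150),
                        rng.normal(0.1, 0.05, 200),
                        rng.normal(4.0, 0.3, 150)])

# Step 1: K-Means with k=2 separates ON from OFF samples.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(power.reshape(-1, 1))
on_label = labels[np.argmax(power)]          # the cluster containing the peak is "ON"
on_mask = labels == on_label

# Step 2: split the ON samples into contiguous cycles.
edges = np.flatnonzero(np.diff(on_mask.astype(int)))
bounds = np.concatenate([[0], edges + 1, [len(power)]])
cycles = [power[a:b] for a, b in zip(bounds[:-1], bounds[1:]) if on_mask[a]]

# Step 3: a SAX word per cycle, then a simple bag-of-words (counts per symbol).
words = [sax_symbols(c) for c in cycles]
vocab = sorted(set("".join(words)))
bow = np.array([[w.count(ch) for ch in vocab] for w in words], dtype=float)

# Step 4: hierarchical clustering of the bag-of-words vectors.
clusters = fcluster(linkage(bow, method="ward"), t=2, criterion="maxclust")
print(words, clusters)
```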
1602.01628 | Dmytro Terletskyi | D. A. Terletskyi, A. I. Provotar | Fuzzy Object-Oriented Dynamic Networks. II | 2 figures | Cybernetics and Systems Analysis, 2016, Volume 52, Issue 1, pp
38-45 | 10.1007/s10559-016-9797-2 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article generalizes object-oriented dynamic networks to the fuzzy case,
which allows one to represent knowledge on objects and classes of objects that
are fuzzy by nature and also to model their changes in time. Within the
framework of the approach described, a mechanism is proposed that makes it
possible to acquire new knowledge on the basis of basic knowledge and
considerably differs from well-known methods used in existing models of
knowledge representation. The approach is illustrated by an example of
construction of a concrete fuzzy object-oriented dynamic network.
| [
{
"version": "v1",
"created": "Thu, 4 Feb 2016 10:50:13 GMT"
},
{
"version": "v2",
"created": "Tue, 16 Feb 2016 19:25:12 GMT"
}
] | 1,455,667,200,000 | [
[
"Terletskyi",
"D. A.",
""
],
[
"Provotar",
"A. I.",
""
]
] |
1602.01971 | Erik Andresen | Erik Andresen, David Haensel, Mohcine Chraibi, and Armin Seyfried | Wayfinding and cognitive maps for pedestrian models | 8 pages, 3 figures, TGF'15 Conference, 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Usually, routing models in pedestrian dynamics assume that agents have
full and global knowledge about the building's structure. However, they neglect
the fact that pedestrians possess no information, or only partial information,
about their position relative to final exits and the possible routes leading to
them. To obtain a more realistic description we introduce a systematic account
of gathering and using spatial knowledge. A new wayfinding model for pedestrian
dynamics is proposed. The model defines for every pedestrian an individual
knowledge representation that incorporates inaccuracies and uncertainties. In
addition, knowledge-driven search strategies are introduced. The presented
concept is tested on a fictitious example scenario.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 10:25:15 GMT"
}
] | 1,454,889,600,000 | [
[
"Andresen",
"Erik",
""
],
[
"Haensel",
"David",
""
],
[
"Chraibi",
"Mohcine",
""
],
[
"Seyfried",
"Armin",
""
]
] |
1602.02169 | Mauricio Toro | Mauricio Toro | Probabilistic Extension to the Concurrent Constraint Factor Oracle Model
for Music Improvisation | 70 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We can program a Real-Time (RT) music improvisation system in C++ without a
formal semantic or we can model it with process calculi such as the
Non-deterministic Timed Concurrent Constraint (ntcc) calculus. "A Concurrent
Constraints Factor Oracle (FO) model for Music Improvisation" (Ccfomi) is an
improvisation model specified on ntcc. Since Ccfomi improvises
non-deterministically, there is no control on choices and therefore little
control over the sequence variation during the improvisation. To avoid this, we
extended Ccfomi using the Probabilistic Non-deterministic Timed Concurrent
Constraint calculus. Our extension to Ccfomi does not change the time and space
complexity of building the FO, thus making our extension compatible with RT.
However, there was not a ntcc interpreter capable of RT to execute Ccfomi. We
developed Ntccrt --a RT capable interpreter for ntcc-- and we executed Ccfomi
on Ntccrt. In the future, we plan to extend Ntccrt to execute our extension to
Ccfomi.
| [
{
"version": "v1",
"created": "Fri, 5 Feb 2016 21:26:53 GMT"
}
] | 1,454,976,000,000 | [
[
"Toro",
"Mauricio",
""
]
] |
1602.02261 | Rodrigo Nogueira | Rodrigo Nogueira and Kyunghyun Cho | End-to-End Goal-Driven Web Navigation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a goal-driven web navigation as a benchmark task for evaluating an
agent with abilities to understand natural language and plan in partially
observed environments. In this challenging task, an agent navigates through a
website, which is represented as a graph consisting of web pages as nodes and
hyperlinks as directed edges, to find a web page in which a query appears. The
agent is required to have sophisticated high-level reasoning based on natural
languages and efficient sequential decision-making capability to succeed. We
release a software tool, called WebNav, that automatically transforms a website
into this goal-driven web navigation task, and as an example, we make WikiNav,
a dataset constructed from the English Wikipedia. We extensively evaluate
different variants of neural net based artificial agents on WikiNav and observe
that the proposed goal-driven web navigation well reflects the advances in
models, making it a suitable benchmark for evaluating future progress.
Furthermore, we extend the WikiNav with question-answer pairs from Jeopardy!
and test the proposed agent based on recurrent neural networks against strong
inverted index based search engines. The artificial agents trained on WikiNav
outperform the engine-based approaches, demonstrating the capability of the
proposed goal-driven navigation as a good proxy for measuring the progress in
real-world tasks such as focused crawling and question-answering.
| [
{
"version": "v1",
"created": "Sat, 6 Feb 2016 14:53:02 GMT"
},
{
"version": "v2",
"created": "Fri, 20 May 2016 16:26:58 GMT"
}
] | 1,463,961,600,000 | [
[
"Nogueira",
"Rodrigo",
""
],
[
"Cho",
"Kyunghyun",
""
]
] |
1602.02617 | Arnaud Martin | Zhun-Ga Liu, Quan Pan, Jean Dezert (Palaiseau), Arnaud Martin (DRUID) | Adaptive imputation of missing values for incomplete pattern
classification | null | Pattern Recognition, Elsevier, 2016, 52 | 10.1016/j.patcog.2015.10.001 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the classification of incomplete patterns, the missing values can either
play a crucial role in the class determination or have only little influence
(or possibly none) on the classification results, depending on the context. We
propose a credal classification method for incomplete patterns with adaptive
imputation of missing values based on belief function theory. First, we try
to classify the object (incomplete pattern) based only on the available
attribute values. As an underlying principle, we assume that the missing
information is not crucial for the classification if a specific class for the
object can be found using only the available information. In this case, the
object is committed to this particular class. However, if the object cannot be
classified without ambiguity, it means that the missing values play a major role
in achieving an accurate classification. In this case, the missing values will
be imputed based on the K-nearest neighbor (K-NN) and self-organizing map (SOM)
techniques, and the edited pattern with the imputation is then classified. The
(original or edited) pattern is respectively classified according to each
training class, and the classification results represented by basic belief
assignments are fused with proper combination rules for making the credal
classification. The object is allowed to belong with different masses of belief
to the specific classes and meta-classes (which are particular disjunctions of
several single classes). The credal classification captures well the
uncertainty and imprecision of classification, and effectively reduces the rate
of misclassifications thanks to the introduction of meta-classes. The
effectiveness of the proposed method with respect to other classical methods is
demonstrated based on several experiments using artificial and real data sets.
| [
{
"version": "v1",
"created": "Mon, 8 Feb 2016 15:52:08 GMT"
}
] | 1,454,976,000,000 | [
[
"Liu",
"Zhun-Ga",
"",
"Palaiseau"
],
[
"Pan",
"Quan",
"",
"Palaiseau"
],
[
"Dezert",
"Jean",
"",
"Palaiseau"
],
[
"Martin",
"Arnaud",
"",
"DRUID"
]
] |
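
As an illustration of the adaptive-imputation idea described in the abstract of 1602.02617 above (classify on the available attributes first, and impute the missing values only when the decision is ambiguous), here is a minimal, hypothetical Python sketch. It substitutes an ordinary probabilistic classifier and scikit-learn's KNNImputer for the paper's belief-function machinery and SOM, and the ambiguity threshold and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.impute import KNNImputer
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Toy complete training data: two Gaussian classes in 2D.
X_train = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y_train = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X_train, y_train)
imputer = KNNImputer(n_neighbors=5).fit(X_train)

def classify_incomplete(x, ambiguity=0.2):
    """Classify a pattern that may contain NaNs.

    1) Score the object using only the available attributes (missing entries are
       replaced by the unconditional training mean, a cheap "ignore it" stand-in).
    2) If the posterior is close to 0.5 (ambiguous), impute the missing values
       with K-NN and classify the edited pattern instead.
    """
    x = np.asarray(x, dtype=float)
    filled_mean = np.where(np.isnan(x), X_train.mean(axis=0), x)
    p = clf.predict_proba(filled_mean.reshape(1, -1))[0, 1]
    if abs(p - 0.5) >= ambiguity:                 # confident: missing values not crucial
        return int(p > 0.5), "available-only"
    edited = imputer.transform(x.reshape(1, -1))  # ambiguous: impute, then classify
    p = clf.predict_proba(edited)[0, 1]
    return int(p > 0.5), "imputed"

print(classify_incomplete([np.nan, -0.5]))
print(classify_incomplete([1.5, np.nan]))
```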
1602.03203 | Szymon Sidor | Szymon Sidor, Peng Yu, Cheng Fang, Brian Williams | Time Resource Networks | 7 pages, submitted for review to IJCAI16 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The problem of scheduling under resource constraints is widely applicable.
One prominent example is power management, in which we have a limited
continuous supply of power but must schedule a number of power-consuming tasks.
Such problems feature tightly coupled continuous resource constraints and
continuous temporal constraints.
We address such problems by introducing the Time Resource Network (TRN), an
encoding for resource-constrained scheduling problems. The definition allows
temporal specifications using a general family of representations derived from
the Simple Temporal network, including the Simple Temporal Network with
Uncertainty, and the probabilistic Simple Temporal Network (Fang et al.
(2014)).
We propose two algorithms for determining the consistency of a TRN: one based
on Mixed Integer Programming and the other based on Constraint Programming,
which we evaluate on scheduling problems with Simple Temporal Constraints and
Probabilistic Temporal Constraints.
| [
{
"version": "v1",
"created": "Tue, 9 Feb 2016 21:49:16 GMT"
}
] | 1,455,148,800,000 | [
[
"Sidor",
"Szymon",
""
],
[
"Yu",
"Peng",
""
],
[
"Fang",
"Cheng",
""
],
[
"Williams",
"Brian",
""
]
] |
1602.03963 | Easton Li Xu | Easton Li Xu, Xiaoning Qian, Tie Liu, Shuguang Cui | Detection of Cooperative Interactions in Logistic Regression Models | 15 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important problem in the field of bioinformatics is to identify
interactive effects among profiled variables for outcome prediction. In this
paper, a logistic regression model with pairwise interactions among a set of
binary covariates is considered. Modeling the structure of the interactions by
a graph, our goal is to recover the interaction graph from independent and
identically distributed (i.i.d.) samples of the covariates and the outcome.
When viewed as a feature selection problem, a simple quantity called
influence is proposed as a measure of the marginal effects of the interaction
terms on the outcome. For the case when the underlying interaction graph is
known to be acyclic, it is shown that a simple algorithm that is based on a
maximum-weight spanning tree with respect to the plug-in estimates of the
influences not only has strong theoretical performance guarantees, but can also
outperform generic feature selection algorithms for recovering the interaction
graph from i.i.d. samples of the covariates and the outcome. Our results can
also be extended to the model that includes both individual effects and
pairwise interactions via the help of an auxiliary covariate.
| [
{
"version": "v1",
"created": "Fri, 12 Feb 2016 05:04:21 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Dec 2016 02:11:04 GMT"
}
] | 1,483,056,000,000 | [
[
"Xu",
"Easton Li",
""
],
[
"Qian",
"Xiaoning",
""
],
[
"Liu",
"Tie",
""
],
[
"Cui",
"Shuguang",
""
]
] |
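
To make the tree-recovery step in the abstract of 1602.03963 above concrete, here is a minimal, hypothetical Python sketch: it scores each candidate covariate pair with a simple plug-in correlation statistic standing in for the paper's influence measure, then takes a maximum-weight spanning tree over those scores. The synthetic data, the scoring function, and the use of networkx are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(2)

# Toy data: binary covariates x1..x4 and an outcome driven by the pairwise
# interactions (x1,x2) and (x2,x3), so the true interaction graph is a path.
n = 5000
X = rng.integers(0, 2, size=(n, 4))
logits = 2.0 * (X[:, 0] * X[:, 1]) + 2.0 * (X[:, 1] * X[:, 2]) - 2.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

def pair_score(i, j):
    """Plug-in estimate of how strongly the product x_i * x_j tracks the outcome
    (a simple stand-in for the influence measure)."""
    z = X[:, i] * X[:, j]
    return abs(np.corrcoef(z, y)[0, 1])

G = nx.Graph()
d = X.shape[1]
for i in range(d):
    for j in range(i + 1, d):
        G.add_edge(i, j, weight=pair_score(i, j))

# Maximum-weight spanning tree as the estimated (acyclic) interaction graph.
tree = nx.maximum_spanning_tree(G, weight="weight")
print(sorted(tree.edges()))   # expected to contain the true edges (0, 1) and (1, 2)
```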
1602.04376 | Fahad Muhammad | Muhammad Fahad | BPCMont: Business Process Change Management Ontology | 5 pages, 7 Figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Change management for evolving collaborative business process development is
crucial when the business logic, transactions and workflow change due to
changes in business strategies or in the organizational and technical
environment. During the change implementation, business processes are analyzed
and improved, ensuring that they capture the proposed change and do not contain
any undesired functionality or change side-effects. This paper presents a
Business Process Change Management approach for the efficient and effective
implementation of change in the business process. The key technology behind our
approach is our proposed Business Process Change Management Ontology (BPCMont),
which is the main contribution of this paper. BPCMont, as a formalized change
specification, helps to revert a business process to a consistent state in the
case of a system crash, an intermediate conflicting stage, or an unauthorized
change; it aids change traceability between the new and old versions of
business processes; it allows change effects to be seen and estimated
effectively; and it makes it easier for stakeholders to validate and verify the
change implementation.
| [
{
"version": "v1",
"created": "Sat, 13 Feb 2016 20:27:44 GMT"
}
] | 1,455,580,800,000 | [
[
"Fahad",
"Muhammad",
""
]
] |
1602.04498 | Andrew Bate | Andrew Bate, Boris Motik, Bernardo Cuenca Grau, Franti\v{s}ek
Siman\v{c}\'ik, Ian Horrocks | Extending Consequence-Based Reasoning to SRIQ | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Consequence-based calculi are a family of reasoning algorithms for
description logics (DLs), and they combine hypertableau and resolution in a way
that often achieves excellent performance in practice. Up to now, however, they
were proposed for either Horn DLs (which do not support disjunction), or for
DLs without counting quantifiers. In this paper we present a novel
consequence-based calculus for SRIQ---a rich DL that supports both features.
This extension is non-trivial since the intermediate consequences that need to
be derived during reasoning cannot be captured using DLs themselves. The
results of our preliminary performance evaluation suggest the feasibility of
our approach in practice.
| [
{
"version": "v1",
"created": "Sun, 14 Feb 2016 19:56:18 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Feb 2016 16:04:55 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Feb 2016 21:17:27 GMT"
}
] | 1,456,358,400,000 | [
[
"Bate",
"Andrew",
""
],
[
"Motik",
"Boris",
""
],
[
"Grau",
"Bernardo Cuenca",
""
],
[
"Simančík",
"František",
""
],
[
"Horrocks",
"Ian",
""
]
] |
1602.04613 | Semeh Ben Salem | Sami Naouali, Semeh Ben Salem | Towards reducing the multidimensionality of OLAP cubes using the
Evolutionary Algorithms and Factor Analysis Methods | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Data warehouses are structures with large amounts of data collected from
heterogeneous sources to be used in a decision support system. Data warehouse
analysis identifies hidden, initially unexpected patterns, and this analysis
requires great memory and computation cost. Data reduction methods have been
proposed to make this analysis easier. In this paper, we present a hybrid
approach based on Genetic Algorithms (GA) as Evolutionary Algorithms and
Multiple Correspondence Analysis (MCA) as a Factor Analysis method to conduct
this reduction. Our approach identifies a reduced subset of p' dimensions from
the initial set of p dimensions, where p' < p, in which it is proposed to find
the fact profile that is closest to a reference. GAs identify the candidate
subsets and the chi-squared formula of the MCA evaluates the quality of each
subset. The study is based on a distance measurement between the reference and
the n fact profiles extracted from the warehouse.
| [
{
"version": "v1",
"created": "Mon, 15 Feb 2016 10:23:12 GMT"
}
] | 1,455,580,800,000 | [
[
"Naouali",
"Sami",
""
],
[
"Salem",
"Semeh Ben",
""
]
] |
1602.04875 | Min Chen | Min Chen and Emilio Frazzoli and David Hsu and Wee Sun Lee | POMDP-lite for Robust Robot Planning under Uncertainty | In Proc. IEEE International Conference on Robotics & Automation
(ICRA) 2016, with supplementary materials | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The partially observable Markov decision process (POMDP) provides a
principled general model for planning under uncertainty. However, solving a
general POMDP is computationally intractable in the worst case. This paper
introduces POMDP-lite, a subclass of POMDPs in which the hidden state variables
are constant or only change deterministically. We show that a POMDP-lite is
equivalent to a set of fully observable Markov decision processes indexed by a
hidden parameter and is useful for modeling a variety of interesting robotic
tasks. We develop a simple model-based Bayesian reinforcement learning
algorithm to solve POMDP-lite models. The algorithm performs well on
large-scale POMDP-lite models with up to $10^{20}$ states and outperforms the
state-of-the-art general-purpose POMDP algorithms. We further show that the
algorithm is near-Bayesian-optimal under suitable conditions.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 00:47:08 GMT"
},
{
"version": "v2",
"created": "Thu, 18 Feb 2016 03:18:30 GMT"
},
{
"version": "v3",
"created": "Tue, 23 Feb 2016 06:44:24 GMT"
}
] | 1,456,272,000,000 | [
[
"Chen",
"Min",
""
],
[
"Frazzoli",
"Emilio",
""
],
[
"Hsu",
"David",
""
],
[
"Lee",
"Wee Sun",
""
]
] |
1602.04936 | Harshit Sethy | Harshit Sethy, Amit Patel | Reinforcement Learning approach for Real Time Strategy Games Battle city
and S3 | 13 pages, vol 9 issue 4 of IJIP | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper we propose reinforcement learning algorithms with a
generalized reward function. In our proposed method we use the Q-learning and
SARSA algorithms with the generalized reward function to train the
reinforcement learning agent. We evaluated the performance of our proposed
algorithms on two real-time strategy games called BattleCity and S3. There are
two main advantages of such an approach compared to other works in RTS: (1) we
can ignore the concept of a simulator, which is often game-specific and usually
hard-coded in RTS games; (2) our system can learn from interaction with any
opponent, quickly change its strategy according to the opponent, and does not
need any human traces as used in previous works. Keywords: Reinforcement
learning, Machine learning, Real time strategy, Artificial intelligence.
| [
{
"version": "v1",
"created": "Tue, 16 Feb 2016 08:17:17 GMT"
}
] | 1,455,667,200,000 | [
[
"Sethy",
"Harshit",
""
],
[
"Patel",
"Amit",
""
]
] |
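
Since the abstract of 1602.04936 above rests on the standard Q-learning and SARSA update rules, a minimal tabular Python sketch of both updates is shown below on a toy chain environment. The environment, hyperparameters, and shaped reward are illustrative assumptions and do not reproduce the authors' BattleCity/S3 setup or their generalized reward function.

```python
import numpy as np

rng = np.random.default_rng(3)

class ChainEnv:
    """Toy 1-D chain: move left/right, reward at the right end (a stand-in for
    a real RTS state abstraction)."""
    n_states, n_actions = 10, 2
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = max(0, self.s - 1) if a == 0 else min(self.n_states - 1, self.s + 1)
        done = self.s == self.n_states - 1
        return self.s, (1.0 if done else -0.01), done   # small step penalty, goal bonus

def eps_greedy(Q, s, eps):
    return rng.integers(Q.shape[1]) if rng.random() < eps else int(np.argmax(Q[s]))

def train(method="q", episodes=300, alpha=0.1, gamma=0.95, eps=0.1):
    env = ChainEnv()
    Q = np.zeros((env.n_states, env.n_actions))
    for _ in range(episodes):
        s = env.reset()
        a = eps_greedy(Q, s, eps)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = eps_greedy(Q, s2, eps)
            if method == "q":      # Q-learning: off-policy, bootstrap on the greedy action
                target = r + gamma * np.max(Q[s2]) * (not done)
            else:                  # SARSA: on-policy, bootstrap on the action actually taken
                target = r + gamma * Q[s2, a2] * (not done)
            Q[s, a] += alpha * (target - Q[s, a])
            s, a = s2, a2
    return Q

print(train("q")[0], train("sarsa")[0])
```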
1602.05404 | Jos Uiterwijk | Jos W.H.M. Uiterwijk | 11 x 11 Domineering is Solved: The first player wins | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have developed a program called MUDoS (Maastricht University Domineering
Solver) that solves Domineering positions in a very efficient way. This enables
known positions (up to the 10 x 10 board) to be solved much more quickly
(measured in the number of investigated nodes).
More importantly, it enables the solution of the 11 x 11 Domineering board, a
board until now far out of reach of previous Domineering solvers. The solution
needed the investigation of 259,689,994,008 nodes, using almost half a year of
computation time on a single simple desktop computer. The results show that
under optimal play the first player wins the 11 x 11 Domineering game,
irrespective of whether Vertical or Horizontal starts the game.
In addition, several other boards hitherto unsolved were solved. Using the
convention that Vertical starts, the 8 x 15, 11 x 9, 12 x 8, 12 x 15, 14 x 8,
and 17 x 6 boards are all won by Vertical, whereas the 6 x 17, 8 x 12, 9 x 11,
and 11 x 10 boards are all won by Horizontal.
| [
{
"version": "v1",
"created": "Wed, 17 Feb 2016 13:34:04 GMT"
}
] | 1,455,753,600,000 | [
[
"Uiterwijk",
"Jos W. H. M.",
""
]
] |
1602.05699 | Heng Zhang | Hai Wan, Heng Zhang, Peng Xiao, Haoran Huang, Yan Zhang | Query Answering with Inconsistent Existential Rules under Stable Model
Semantics | Accepted by AAAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional inconsistency-tolerant query answering in ontology-based data
access relies on selecting maximal components of an ABox/database which are
consistent with the ontology. However, some rules in ontologies might be
unreliable if they are extracted from ontology learning or written by
unskillful knowledge engineers. In this paper we present a framework of
handling inconsistent existential rules under stable model semantics, which is
defined by a notion called rule repairs to select maximal components of the
existential rules. Surprisingly, for R-acyclic existential rules with
R-stratified or guarded existential rules with stratified negations, both the
data complexity and combined complexity of query answering under the rule
repair semantics remain the same as that under the conventional query
answering semantics. This leads us to propose several approaches to handle the
rule repair semantics by calling answer set programming solvers. An
experimental evaluation shows that these approaches have good scalability of
query answering under rule repairs on realistic cases.
| [
{
"version": "v1",
"created": "Thu, 18 Feb 2016 07:23:28 GMT"
}
] | 1,455,840,000,000 | [
[
"Wan",
"Hai",
""
],
[
"Zhang",
"Heng",
""
],
[
"Xiao",
"Peng",
""
],
[
"Huang",
"Haoran",
""
],
[
"Zhang",
"Yan",
""
]
] |
1602.05705 | Jonathan Nix | Jonathan Darren Nix | A theory of contemplation | 18 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper you can explore the application of some notable Boolean-derived
methods, namely the Disjunctive Normal Form representation of logic table
expansions, and extend them to a real-valued logic model which is able to
utilize quantities on the ranges [0,1], [-1,1], [a,b], (x,y), (x,y,z), etc.,
so as to produce a logical programming of arbitrary range, precision, and
dimensionality. This enables contemplation at a logical level in notions of
arbitrary data, colors, and spatial constructs, illustrated with an example of
the production of a game character's logic in mathematical form.
| [
{
"version": "v1",
"created": "Thu, 18 Feb 2016 07:42:00 GMT"
},
{
"version": "v2",
"created": "Sun, 28 Oct 2018 01:05:14 GMT"
},
{
"version": "v3",
"created": "Fri, 27 Sep 2019 03:12:55 GMT"
},
{
"version": "v4",
"created": "Sun, 20 Oct 2019 17:51:11 GMT"
},
{
"version": "v5",
"created": "Tue, 22 Oct 2019 23:22:39 GMT"
},
{
"version": "v6",
"created": "Wed, 30 Oct 2019 18:56:58 GMT"
},
{
"version": "v7",
"created": "Tue, 5 Nov 2019 19:57:31 GMT"
},
{
"version": "v8",
"created": "Fri, 8 Nov 2019 17:36:46 GMT"
}
] | 1,573,430,400,000 | [
[
"Nix",
"Jonathan Darren",
""
]
] |
1602.05828 | Zied Bouraoui | Jean Francois Baget, Salem Benferhat, Zied Bouraoui, Madalina
Croitoru, Marie-Laure Mugnier, Odile Papini, Swan Rocher, Karim Tabia | A General Modifier-based Framework for Inconsistency-Tolerant Query
Answering | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a general framework for inconsistency-tolerant query answering
within the existential rule setting. This framework unifies the main semantics
proposed in the state of the art and introduces new ones based on cardinality and
majority principles. It relies on two key notions: modifiers and inference
strategies. An inconsistency-tolerant semantics is seen as a composite modifier
plus an inference strategy. We compare the obtained semantics from a
productivity point of view.
| [
{
"version": "v1",
"created": "Thu, 18 Feb 2016 15:13:00 GMT"
}
] | 1,455,840,000,000 | [
[
"Baget",
"Jean Francois",
""
],
[
"Benferhat",
"Salem",
""
],
[
"Bouraoui",
"Zied",
""
],
[
"Croitoru",
"Madalina",
""
],
[
"Mugnier",
"Marie-Laure",
""
],
[
"Papini",
"Odile",
""
],
[
"Rocher",
"Swan",
""
],
[
"Tabia",
"Karim",
""
]
] |
1602.06462 | Toby Walsh | Toby Walsh | The Singularity May Never Be Near | Under review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is both much optimism and pessimism around artificial intelligence (AI)
today. The optimists are investing millions of dollars, and even in some cases
billions of dollars into AI. The pessimists, on the other hand, predict that AI
will end many things: jobs, warfare, and even the human race. Both the
optimists and the pessimists often appeal to the idea of a technological
singularity, a point in time where machine intelligence starts to run away, and
a new, more intelligent species starts to inhabit the earth. If the optimists
are right, this will be a moment that fundamentally changes our economy and our
society. If the pessimists are right, this will be a moment that also
fundamentally changes our economy and our society. It is therefore very
worthwhile spending some time deciding if either of them might be right.
| [
{
"version": "v1",
"created": "Sat, 20 Feb 2016 21:09:07 GMT"
}
] | 1,456,185,600,000 | [
[
"Walsh",
"Toby",
""
]
] |
1602.06484 | Mark Riedl | Mark O. Riedl | Computational Narrative Intelligence: A Human-Centered Goal for
Artificial Intelligence | 5 pages, published in the CHI 2016 Workshop on Human-Centered Machine
Learning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Narrative intelligence is the ability to craft, tell, understand, and respond
affectively to stories. We argue that instilling artificial intelligences with
computational narrative intelligence affords a number of applications
beneficial to humans. We lay out some of the machine learning challenges
necessary to solve to achieve computational narrative intelligence. Finally, we
argue that computational narrative is a practical step towards machine
enculturation, the teaching of sociocultural values to machines.
| [
{
"version": "v1",
"created": "Sun, 21 Feb 2016 01:59:09 GMT"
}
] | 1,456,185,600,000 | [
[
"Riedl",
"Mark O.",
""
]
] |
1602.07565 | Petr Novotn\'y | Tom\'a\v{s} Br\'azdil, Krishnendu Chatterjee, Martin Chmel\'ik, Anchit
Gupta, Petr Novotn\'y | Stochastic Shortest Path with Energy Constraints in POMDPs | Technical report accompanying a paper published in proceedings of
AAMAS 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider partially observable Markov decision processes (POMDPs) with a
set of target states and positive integer costs associated with every
transition. The traditional optimization objective (stochastic shortest path)
asks to minimize the expected total cost until the target set is reached. We
extend the traditional framework of POMDPs to model energy consumption, which
represents a hard constraint. The energy levels may increase and decrease with
transitions, and the hard constraint requires that the energy level must remain
positive in all steps till the target is reached. First, we present a novel
algorithm for solving POMDPs with energy levels, building on existing POMDP
solvers and using RTDP as its main method. Our second contribution is related
to policy representation. For larger POMDP instances the policies computed by
existing solvers are too large to be understandable. We present an automated
procedure based on machine learning techniques that automatically extracts
important decisions of the policy allowing us to compute succinct human
readable policies. Finally, we show experimentally that our algorithm performs
well and computes succinct policies on a number of POMDP instances from the
literature that were naturally enhanced with energy levels.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 15:41:22 GMT"
},
{
"version": "v2",
"created": "Wed, 11 May 2016 16:26:20 GMT"
}
] | 1,463,011,200,000 | [
[
"Brázdil",
"Tomáš",
""
],
[
"Chatterjee",
"Krishnendu",
""
],
[
"Chmelík",
"Martin",
""
],
[
"Gupta",
"Anchit",
""
],
[
"Novotný",
"Petr",
""
]
] |
1602.07566 | Andrea Burattin | Mirko Polato, Alessandro Sperduti, Andrea Burattin, Massimiliano de
Leoni | Time and Activity Sequence Prediction of Business Process Instances | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ability to know in advance the trend of running process instances, with
respect to different features, such as the expected completion time, would
allow business managers to counteract undesired situations in a timely manner, in order
to prevent losses. Therefore, the ability to accurately predict future features
of running business process instances would be a very helpful aid when managing
processes, especially under service level agreement constraints. However,
making such accurate forecasts is not easy: many factors may influence the
predicted features.
Many approaches have been proposed to cope with this problem but all of them
assume that the underlying process is stationary. However, in real cases this
assumption is not always true. In this work we present new methods for
predicting the remaining time of running cases. In particular we propose a
method, assuming process stationarity, which outperforms the state-of-the-art
and two other methods which are able to make predictions even with
non-stationary processes. We also describe an approach able to predict the full
sequence of activities that a running case is going to take. All these methods
are extensively evaluated on two real case studies.
| [
{
"version": "v1",
"created": "Wed, 24 Feb 2016 15:42:06 GMT"
}
] | 1,456,358,400,000 | [
[
"Polato",
"Mirko",
""
],
[
"Sperduti",
"Alessandro",
""
],
[
"Burattin",
"Andrea",
""
],
[
"de Leoni",
"Massimiliano",
""
]
] |
1602.07721 | Matthew Guzdial | Matthew Guzdial and Mark Riedl | Toward Game Level Generation from Gameplay Videos | 8 pages, 10 figures, Procedural Content Generation Workshop 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Algorithms that generate computer game content require game design knowledge.
We present an approach to automatically learn game design knowledge for level
design from gameplay videos. We further demonstrate how the acquired design
knowledge can be used to generate sections of game levels. Our approach
involves parsing video of people playing a game to detect the appearance of
patterns of sprites and utilizing machine learning to build a probabilistic
model of sprite placement. We show how rich game design information can be
automatically parsed from gameplay videos and represented as a set of
generative probabilistic models. We use Super Mario Bros. as a proof of
concept. We evaluate our approach on a measure of playability and stylistic
similarity to the original levels as represented in the gameplay videos.
| [
{
"version": "v1",
"created": "Tue, 23 Feb 2016 02:38:16 GMT"
}
] | 1,456,444,800,000 | [
[
"Guzdial",
"Matthew",
""
],
[
"Riedl",
"Mark",
""
]
] |
1602.07970 | Antti Hyttinen | Antti Hyttinen, Sergey Plis, Matti J\"arvisalo, Frederick Eberhardt,
David Danks | Causal Discovery from Subsampled Time Series Data by Constraint
Optimization | International Conference on Probabilistic Graphical Models, PGM 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper focuses on causal structure estimation from time series data in
which measurements are obtained at a coarser timescale than the causal
timescale of the underlying system. Previous work has shown that such
subsampling can lead to significant errors about the system's causal structure
if not properly taken into account. In this paper, we first consider the search
for the system timescale causal structures that correspond to a given
measurement timescale structure. We provide a constraint satisfaction procedure
whose computational performance is several orders of magnitude better than
previous approaches. We then consider finite-sample data as input, and propose
the first constraint optimization approach for recovering the system timescale
causal structure. This algorithm optimally recovers from possible conflicts due
to statistical errors. More generally, these advances allow for a robust and
non-parametric estimation of system timescale causal structures from subsampled
time series data.
| [
{
"version": "v1",
"created": "Thu, 25 Feb 2016 15:52:33 GMT"
},
{
"version": "v2",
"created": "Wed, 13 Jul 2016 08:11:35 GMT"
}
] | 1,468,454,400,000 | [
[
"Hyttinen",
"Antti",
""
],
[
"Plis",
"Sergey",
""
],
[
"Järvisalo",
"Matti",
""
],
[
"Eberhardt",
"Frederick",
""
],
[
"Danks",
"David",
""
]
] |
1602.08447 | Le Hoang Son | Mumtaz Ali, Nguyen Van Minh, Le Hoang Son | A Neutrosophic Recommender System for Medical Diagnosis Based on
Algebraic Neutrosophic Measures | Keywords: Medical diagnosis, neutrosophic set, neutrosophic
recommender system, non-linear regression model | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neutrosophic set has the ability to handle uncertain, incomplete,
inconsistent, indeterminate information in a more accurate way. In this paper,
we propose a neutrosophic recommender system to predict diseases based on
neutrosophic sets, which includes a single-criterion neutrosophic recommender
system (SC-NRS) and multi-criterion neutrosophic recommender system (MC-NRS).
Further, we investigated some algebraic operations of neutrosophic recommender
system such as union, complement, intersection, probabilistic sum, bold sum,
bold intersection, bounded difference, symmetric difference, convex linear sum
of min and max operators, Cartesian product, associativity, commutativity and
distributivity. Based on these operations, we studied the algebraic structures
such as lattices, Kleene algebra, De Morgan algebra, Brouwerian algebra, BCK
algebra, Stone algebra and MV algebra. In addition, we introduced several types
of similarity measures based on these algebraic operations and studied some of
their theoretic properties. Moreover, we accomplished a prediction formula
using the proposed algebraic similarity measure. We also proposed a new
algorithm for medical diagnosis based on neutrosophic recommender system.
Finally to check the validity of the proposed methodology, we made experiments
on the datasets Heart, RHC, Breast cancer, Diabetes and DMD. At the end, we
presented the MSE and computational time by comparing the proposed algorithm
with the relevant ones such as ICSM, DSM, CARE, CFMD, as well as other variants
namely Variant 67, Variant 69, and Variant 71, both in tabular and graphical form
to analyze the efficiency and accuracy. Finally we analyzed the strength of all
8 algorithms by ANOVA statistical tool.
| [
{
"version": "v1",
"created": "Thu, 25 Feb 2016 03:20:00 GMT"
}
] | 1,456,704,000,000 | [
[
"Ali",
"Mumtaz",
""
],
[
"Van Minh",
"Nguyen",
""
],
[
"Son",
"Le Hoang",
""
]
] |
1602.08610 | Hongyu Yang | Hongyu Yang, Cynthia Rudin, Margo Seltzer | Scalable Bayesian Rule Lists | 31 pages, 19 figures | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present an algorithm for building probabilistic rule lists that is two
orders of magnitude faster than previous work. Rule list algorithms are
competitors for decision tree algorithms. They are associative classifiers, in
that they are built from pre-mined association rules. They have a logical
structure that is a sequence of IF-THEN rules, identical to a decision list or
one-sided decision tree. Instead of using greedy splitting and pruning like
decision tree algorithms, we fully optimize over rule lists, striking a
practical balance between accuracy, interpretability, and computational speed.
The algorithm presented here uses a mixture of theoretical bounds (tight enough
to have practical implications as a screening or bounding procedure),
computational reuse, and highly tuned language libraries to achieve
computational efficiency. Currently, for many practical problems, this method
achieves better accuracy and sparsity than decision trees; further, in many
cases, the computational time is practical and often less than that of decision
trees. The result is a probabilistic classifier (which estimates P(y = 1|x) for
each x) that optimizes the posterior of a Bayesian hierarchical model over rule
lists.
| [
{
"version": "v1",
"created": "Sat, 27 Feb 2016 16:29:24 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Apr 2017 07:01:26 GMT"
}
] | 1,491,264,000,000 | [
[
"Yang",
"Hongyu",
""
],
[
"Rudin",
"Cynthia",
""
],
[
"Seltzer",
"Margo",
""
]
] |
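
To make the rule-list structure described in the abstract of 1602.08610 above concrete, here is a minimal, hypothetical Python sketch of how a fitted probabilistic rule list is applied at prediction time: each rule is an IF-THEN pair, and P(y = 1 | x) comes from the first rule whose condition matches. The example rules and probabilities are invented for illustration; the sketch does not implement the paper's Bayesian search over rule lists.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    description: str
    condition: Callable[[Dict[str, float]], bool]   # a pre-mined association rule
    p_positive: float                                # estimated P(y = 1 | rule fires)

# A tiny, hand-written rule list (the default rule always fires last).
rule_list: List[Rule] = [
    Rule("IF age < 30 AND income > 50k", lambda x: x["age"] < 30 and x["income"] > 50_000, 0.85),
    Rule("IF age >= 60",                 lambda x: x["age"] >= 60,                          0.20),
    Rule("ELSE (default)",               lambda x: True,                                    0.50),
]

def predict_proba(x: Dict[str, float]) -> float:
    """Return P(y = 1 | x): the probability attached to the first matching rule."""
    for rule in rule_list:
        if rule.condition(x):
            return rule.p_positive
    raise RuntimeError("unreachable: the default rule always matches")

for person in ({"age": 25, "income": 60_000},
               {"age": 65, "income": 30_000},
               {"age": 40, "income": 40_000}):
    print(person, "->", predict_proba(person))
```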
1602.09076 | Paolo Campigotto | Paolo Campigotto, Christian Rudloff, Maximilian Leodolter and Dietmar
Bauer | Personalized and situation-aware multimodal route recommendations: the
FAVOUR algorithm | 12 pages, 6 figures, 1 table. Submitted to IEEE Transactions on
Intelligent Transportation Systems journal for publication | null | 10.1109/TITS.2016.2565643 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Route choice in multimodal networks shows a considerable variation between
different individuals as well as the current situational context.
Personalization of recommendation algorithms is already common in many areas,
e.g., online retail. However, most online routing applications still provide
shortest distance or shortest travel-time routes only, neglecting individual
preferences as well as the current situation. Both aspects are of particular
importance in a multimodal setting, as the attractiveness of some transportation modes
such as biking crucially depends on personal characteristics and exogenous
factors like the weather. This paper introduces the FAVourite rOUte
Recommendation (FAVOUR) approach to provide personalized, situation-aware route
proposals based on three steps: first, at the initialization stage, the user
provides limited information (home location, work place, mobility options,
sociodemographics) used to select one out of a small number of initial
profiles. Second, based on this information, a stated preference survey is
designed in order to sharpen the profile. In this step a mass preference prior
is used to encode the prior knowledge on preferences from the class identified
in step one. Third, the profile is then continuously updated during
usage of the routing services. The last two steps use Bayesian learning
techniques in order to incorporate information from all contributing
individuals. The FAVOUR approach is presented in detail and tested on a small
number of survey participants. The experimental results on this real-world
dataset show that FAVOUR generates better-quality recommendations w.r.t.
alternative learning algorithms from the literature. In particular the
definition of the mass preference prior for initialization of step two is shown
to provide better predictions than a number of alternatives from the
literature.
| [
{
"version": "v1",
"created": "Mon, 29 Feb 2016 18:16:12 GMT"
}
] | 1,479,427,200,000 | [
[
"Campigotto",
"Paolo",
""
],
[
"Rudloff",
"Christian",
""
],
[
"Leodolter",
"Maximilian",
""
],
[
"Bauer",
"Dietmar",
""
]
] |
1603.00772 | Azad Naik | Azad Naik, Huzefa Rangwala | Filter based Taxonomy Modification for Improving Hierarchical
Classification | The conference version of the paper is submitted for publication | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical Classification (HC) is a supervised learning problem where
unlabeled instances are classified into a taxonomy of classes. Several methods
that utilize the hierarchical structure have been developed to improve the HC
performance. However, in most cases the hierarchical structure defined a priori by
domain experts is inconsistent; as a consequence, performance improvement is not
noticeable in comparison to flat classification methods. We propose a scalable
data-driven filter based rewiring approach to modify an expert-defined
hierarchy. Experimental comparisons of top-down HC with our modified hierarchy,
on a wide range of datasets show classification performance improvement over
the baseline hierarchy (i.e., the one defined by experts), clustered hierarchy and
flattening based hierarchy modification approaches. In comparison to existing
rewiring approaches, our developed method (rewHier) is computationally
efficient, enabling it to scale to datasets with large numbers of classes,
instances and features. We also show that our modified hierarchy leads to
improved classification performance for classes with few training samples in
comparison to flat and state-of-the-art HC approaches.
| [
{
"version": "v1",
"created": "Wed, 2 Mar 2016 16:14:49 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Jun 2016 06:41:42 GMT"
},
{
"version": "v3",
"created": "Sat, 15 Oct 2016 06:21:54 GMT"
}
] | 1,476,748,800,000 | [
[
"Naik",
"Azad",
""
],
[
"Rangwala",
"Huzefa",
""
]
] |
1603.01182 | Filipe Alves Neto Verri | Filipe Alves Neto Verri, Paulo Roberto Urio, Liang Zhao | Network Unfolding Map by Edge Dynamics Modeling | Published version in http://ieeexplore.ieee.org/document/7762202/ | IEEE Transactions on Neural Networks and Learning Systems, vol.
29, no. 2, pp. 405-418, Feb. 2018. doi: 10.1109/TNNLS.2016.2626341 | 10.1109/TNNLS.2016.2626341 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emergence of collective dynamics in neural networks is a mechanism of the
animal and human brain for information processing. In this paper, we develop a
computational technique using distributed processing elements in a complex
network, which are called particles, to solve semi-supervised learning
problems. Three actions govern the particles' dynamics: generation, walking,
and absorption. Labeled vertices generate new particles that compete against
rival particles for edge domination. Active particles randomly walk in the
network until they are absorbed by either a rival vertex or an edge currently
dominated by rival particles. The result from the model evolution consists of
sets of edges arranged by the label dominance. Each set tends to form a
connected subnetwork to represent a data class. Although the intrinsic dynamics
of the model is a stochastic one, we prove there exists a deterministic version
with largely reduced computational complexity; specifically, with linear
growth. Furthermore, the edge domination process corresponds to an unfolding
map in such a way that edges "stretch" and "shrink" according to the vertex-edge
dynamics. Consequently, the unfolding effect summarizes the relevant
relationships between vertices and the uncovered data classes. The proposed
model captures important details of connectivity patterns over the vertex-edge
dynamics evolution, in contrast to previous approaches which focused on only
vertex or only edge dynamics. Computer simulations reveal that the new model
can identify nonlinear features in both real and artificial data, including
boundaries between distinct classes and overlapping structures of data.
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2016 17:11:23 GMT"
},
{
"version": "v2",
"created": "Mon, 19 Feb 2018 12:02:21 GMT"
}
] | 1,519,084,800,000 | [
[
"Verri",
"Filipe Alves Neto",
""
],
[
"Urio",
"Paulo Roberto",
""
],
[
"Zhao",
"Liang",
""
]
] |
1603.01228 | Zolt\'an Kov\'acs | Zolt\'an Kov\'acs, Csilla S\'olyom-Gecse | GeoGebra Tools with Proof Capabilities | 22 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report about significant enhancements of the complex algebraic geometry
theorem proving subsystem in GeoGebra for automated proofs in Euclidean
geometry, concerning the extension of numerous GeoGebra tools with proof
capabilities. As a result, a number of elementary theorems can be proven by
using GeoGebra's intuitive user interface on various computer architectures
including native Java and web based systems with JavaScript. We also provide a
test suite for benchmarking our results with 200 test cases.
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2016 19:29:08 GMT"
}
] | 1,457,049,600,000 | [
[
"Kovács",
"Zoltán",
""
],
[
"Sólyom-Gecse",
"Csilla",
""
]
] |
1603.01312 | Rob Fergus | Adam Lerer, Sam Gross and Rob Fergus | Learning Physical Intuition of Block Towers by Example | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wooden blocks are a common toy for infants, allowing them to develop motor
skills and gain intuition about the physical behavior of the world. In this
paper, we explore the ability of deep feed-forward models to learn such
intuitive physics. Using a 3D game engine, we create small towers of wooden
blocks whose stability is randomized and render them collapsing (or remaining
upright). This data allows us to train large convolutional network models which
can accurately predict the outcome, as well as estimating the block
trajectories. The models are also able to generalize in two important ways: (i)
to new physical scenarios, e.g. towers with an additional block and (ii) to
images of real wooden blocks, where it obtains a performance comparable to
human subjects.
| [
{
"version": "v1",
"created": "Thu, 3 Mar 2016 22:59:35 GMT"
}
] | 1,457,308,800,000 | [
[
"Lerer",
"Adam",
""
],
[
"Gross",
"Sam",
""
],
[
"Fergus",
"Rob",
""
]
] |
1603.01722 | Paolo Pareti Mr. | Paolo Pareti, Ewan Klein, Adam Barker | A Linked Data Scalability Challenge: Concept Reuse Leads to Semantic
Decay | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The increasing amount of available Linked Data resources is laying the
foundations for more advanced Semantic Web applications. One of their main
limitations, however, remains the generally low level of data quality. In this
paper we focus on a measure of quality which is negatively affected by the
increase of the available resources. We propose a measure of semantic richness
of Linked Data concepts and we demonstrate our hypothesis that the more a
concept is reused, the less semantically rich it becomes. This is a significant
scalability issue, as one of the core aspects of Linked Data is the propagation
of semantic information on the Web by reusing common terms. We prove our
hypothesis with respect to our measure of semantic richness and we validate our
model empirically. Finally, we suggest possible future directions to address
this scalability problem.
| [
{
"version": "v1",
"created": "Sat, 5 Mar 2016 12:50:22 GMT"
}
] | 1,457,395,200,000 | [
[
"Pareti",
"Paolo",
""
],
[
"Klein",
"Ewan",
""
],
[
"Barker",
"Adam",
""
]
] |
1603.02738 | Matthew Guzdial | Matthew Guzdial and Mark Riedl | Learning to Blend Computer Game Levels | 8 pages, 11 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an approach to generate novel computer game levels that blend
different game concepts in an unsupervised fashion. Our primary contribution is
an analogical reasoning process to construct blends between level design models
learned from gameplay videos. The models represent probabilistic relationships
between elements in the game. An analogical reasoning process maps features
between two models to produce blended models that can then generate new level
chunks. As a proof-of-concept we train our system on the classic platformer
game Super Mario Bros. due to its highly-regarded and well understood level
design. We evaluate the extent to which the models represent stylistic level
design knowledge and demonstrate the ability of our system to explain levels
that were blended by human expert designers.
| [
{
"version": "v1",
"created": "Tue, 8 Mar 2016 23:19:50 GMT"
}
] | 1,457,568,000,000 | [
[
"Guzdial",
"Matthew",
""
],
[
"Riedl",
"Mark",
""
]
] |
1603.03267 | Vicen\c{c} G\'omez Cerd\`a | Anders Jonsson, Vicen\c{c} G\'omez | Hierarchical Linearly-Solvable Markov Decision Problems | 11 pages, 6 figures, 26th International Conference on Automated
Planning and Scheduling | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a hierarchical reinforcement learning framework that formulates
each task in the hierarchy as a special type of Markov decision process for
which the Bellman equation is linear and has analytical solution. Problems of
this type, called linearly-solvable MDPs (LMDPs) have interesting properties
that can be exploited in a hierarchical setting, such as efficient learning of
the optimal value function or task compositionality. The proposed hierarchical
approach can also be seen as a novel alternative to solving LMDPs with large
state spaces. We derive a hierarchical version of the so-called Z-learning
algorithm that learns different tasks simultaneously and show empirically that
it significantly outperforms the state-of-the-art learning methods in two
classical hierarchical reinforcement learning domains: the taxi domain and an
autonomous guided vehicle task.
| [
{
"version": "v1",
"created": "Thu, 10 Mar 2016 13:50:31 GMT"
}
] | 1,457,654,400,000 | [
[
"Jonsson",
"Anders",
""
],
[
"Gómez",
"Vicenç",
""
]
] |
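
The linearly-solvable MDPs referenced in the abstract of 1603.03267 above admit a compact worked example: for a first-exit LMDP the desirability function z = exp(-v) satisfies the linear fixed-point equation z(s) = exp(-q(s)) * E_p[z(s')], which the hypothetical Python sketch below solves by simple iteration on a toy chain. It does not reproduce the paper's hierarchical decomposition or Z-learning; the chain, state costs, and passive dynamics are illustrative assumptions.

```python
import numpy as np

# Minimal first-exit LMDP on a 5-state chain. Passive dynamics p: stay or move
# to a neighbour with equal probability. State cost q; state 4 is terminal.
n = 5
q = np.array([1.0, 1.0, 1.0, 1.0, 0.0])       # exp(-q) penalizes undesirable states
P = np.zeros((n, n))
for s in range(n - 1):
    nbrs = [max(s - 1, 0), s, min(s + 1, n - 1)]
    for t in nbrs:
        P[s, t] += 1.0 / len(nbrs)
P[n - 1, n - 1] = 1.0                           # terminal state is absorbing

# Linear Bellman equation: z(s) = exp(-q(s)) * sum_j p(j|s) z(j), z(goal) = exp(-q(goal)).
z = np.ones(n)
for _ in range(500):                            # fixed-point (power) iteration
    z_new = np.exp(-q) * (P @ z)
    z_new[n - 1] = np.exp(-q[n - 1])
    z = z_new

v = -np.log(z)                                  # optimal cost-to-go
# Optimal controlled dynamics: u*(j|s) proportional to p(j|s) * z(j).
U = P * z[None, :]
U /= U.sum(axis=1, keepdims=True)
print(np.round(v, 3))
print(np.round(U, 3))
```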
1603.03511 | Yi Zhou Dr. | Yi Zhou | A Set Theoretic Approach for Knowledge Representation: the
Representation Part | This paper targets an ambitious goal to rebuild a foundation of
knowledge representation based on set theory rather than classical logic. Any
comments are welcome | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a set theoretic approach for knowledge
representation. While the syntax of an application domain is captured by set
theoretic constructs including individuals, concepts and operators, knowledge
is formalized by equality assertions. We first present a primitive form that
uses minimal assumed knowledge and constructs. Then, assuming naive set theory,
we extend it by definitions, which are special kinds of knowledge.
Interestingly, we show that the primitive form is expressive enough to define
logic operators, not only propositional connectives but also quantifiers.
| [
{
"version": "v1",
"created": "Fri, 11 Mar 2016 03:22:12 GMT"
}
] | 1,457,913,600,000 | [
[
"Zhou",
"Yi",
""
]
] |
1603.03518 | Peng Yang | Peng Yang, Ke Tang, Xin Yao | High-dimensional Black-box Optimization via Divide and Approximate
Conquer | 7 pages, 2 figures, conference | IEEE Transactions on Evolutionary Computation, 2018, 22(1):
143-156 | 10.1109/TEVC.2017.2672689 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Divide and Conquer (DC) is conceptually well suited to high-dimensional
optimization by decomposing a problem into multiple small-scale sub-problems.
However, appealing performance can seldom be observed when the sub-problems are
interdependent. This paper suggests that the major difficulty of tackling
interdependent sub-problems lies in the precise evaluation of a partial
solution (to a sub-problem), which can be overwhelmingly costly and thus makes
sub-problems non-trivial to conquer. Thus, we propose an approximation
approach, named Divide and Approximate Conquer (DAC), which reduces the cost of
partial solution evaluation from exponential time to polynomial time.
Meanwhile, the convergence to the global optimum (of the original problem) is
still guaranteed. The effectiveness of DAC is demonstrated empirically on two
sets of non-separable high-dimensional problems.
| [
{
"version": "v1",
"created": "Fri, 11 Mar 2016 04:50:59 GMT"
},
{
"version": "v2",
"created": "Mon, 21 Mar 2016 02:06:09 GMT"
}
] | 1,531,353,600,000 | [
[
"Yang",
"Peng",
""
],
[
"Tang",
"Ke",
""
],
[
"Yao",
"Xin",
""
]
] |
1603.03729 | Vasile Patrascu | Vasile Patrascu | Penta and Hexa Valued Representation of Neutrosophic Information | null | null | 10.13140/RG.2.1.2667.1762 | IT.1.3.2016 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting from the primary representation of neutrosophic information, namely
the degree of truth, degree of indeterminacy and degree of falsity, we define a
nuanced representation in a penta valued fuzzy space, described by the index of
truth, index of falsity, index of ignorance, index of contradiction and index
of hesitation. Also, we construct an associated penta valued logic and, using
this logic, we define the following operators for the proposed penta valued
structure: union, intersection, negation, complement and dual.
Then, the penta valued representation is extended to a hexa valued one, adding
the sixth component, namely the index of ambiguity.
| [
{
"version": "v1",
"created": "Thu, 10 Mar 2016 04:18:38 GMT"
}
] | 1,457,913,600,000 | [
[
"Patrascu",
"Vasile",
""
]
] |
1603.04110 | Seyed Morteza Mousavi Barroudi | Seyed Morteza Mousavi, Aaron Harwood, Shanika Karunasekera, Mojtaba
Maghrebi | Geometry of Interest (GOI): Spatio-Temporal Destination Extraction and
Partitioning in GPS Trajectory Data | A version of this technical report has been submitted to the Springer
Journal of Ambient Intelligence and Humanized Computing and it is under
review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nowadays large amounts of GPS trajectory data is being continuously collected
by GPS-enabled devices such as vehicles navigation systems and mobile phones.
GPS trajectory data is useful for applications such as traffic management,
location forecasting, and itinerary planning. Such applications often need to
extract the time-stamped Sequence of Visited Locations (SVLs) of the mobile
objects. The nearest neighbor query (NNQ) is the most applied method for
labeling the visited locations based on the IDs of the POIs in the process of
SVL generation. NNQ in some scenarios is not accurate enough. To improve the
quality of the extracted SVLs, instead of using NNQ, we label the visited
locations as the IDs of the POIs which geometrically intersect with the GPS
observations. The intersection operator requires the accurate geometry of the
points of interest, which we refer to as the Geometries of Interest (GOIs).
In some application domains (e.g. movement trajectories of animals), adequate
information about the POIs and their GOIs may not be available a priori, or
they may not be publicly accessible and, therefore, they need to be derived
from GPS trajectory data. In this paper we propose a novel method for
estimating the POIs and their GOIs, which consists of three phases: (i)
extracting the geometries of the stay regions; (ii) constructing the geometry
of destination regions based on the extracted stay regions; and (iii)
constructing the GOIs based on the geometries of the destination regions. Using
the geometric similarity to known GOIs as the major evaluation criterion, the
experiments we performed using long-term GPS trajectory data show that our
method outperforms the existing approaches.
| [
{
"version": "v1",
"created": "Mon, 14 Mar 2016 01:52:28 GMT"
},
{
"version": "v2",
"created": "Mon, 16 May 2016 20:24:07 GMT"
}
] | 1,463,529,600,000 | [
[
"Mousavi",
"Seyed Morteza",
""
],
[
"Harwood",
"Aaron",
""
],
[
"Karunasekera",
"Shanika",
""
],
[
"Maghrebi",
"Mojtaba",
""
]
] |
1603.04402 | Abhishek Sharma | Abhishek Sharma, Michael Witbrock, Keith Goolsbey | Controlling Search in Very large Commonsense Knowledge Bases: A Machine
Learning Approach | 6 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Very large commonsense knowledge bases (KBs) often have thousands to millions
of axioms, of which relatively few are relevant for answering any given query.
A large number of irrelevant axioms can easily overwhelm resolution-based
theorem provers. Therefore, methods that help the reasoner identify useful
inference paths form an essential part of large-scale reasoning systems. In
this paper, we describe two ordering heuristics for optimization of reasoning
in such systems. First, we discuss how decision trees can be used to select
inference steps that are more likely to succeed. Second, we identify a small
set of problem instance features that suffice to guide searches away from
intractable regions of the search space. We show the efficacy of these
techniques via experiments on thousands of queries from the Cyc KB. Results
show that these methods lead to an order of magnitude reduction in inference
time.
| [
{
"version": "v1",
"created": "Mon, 14 Mar 2016 19:20:36 GMT"
}
] | 1,458,000,000,000 | [
[
"Sharma",
"Abhishek",
""
],
[
"Witbrock",
"Michael",
""
],
[
"Goolsbey",
"Keith",
""
]
] |
1603.06459 | Nguyen Thi Thanh Dang | Nguyen Thi Thanh Dang, Patrick De Causmaecker | Characterization of neighborhood behaviours in a multi-neighborhood
local search algorithm | 13 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a multi-neighborhood local search algorithm with a large number
of possible neighborhoods. Each neighborhood is accompanied by a weight value
which represents the probability of being chosen at each iteration. These
weights are fixed before the algorithm runs, and are considered as parameters
of the algorithm. Given a set of instances, off-line tuning of the algorithm's
parameters can be done by automated algorithm configuration tools (e.g., SMAC).
However, the large number of neighborhoods can make the tuning expensive and
difficult even when the number of parameters has been reduced by some
intuition. In this work, we propose a systematic method to characterize each
neighborhood's behaviours, representing them as a feature vector, and using
cluster analysis to form similar groups of neighborhoods. The novelty of our
characterization method is its ability to reflect changes in behaviour
according to the hardness of different solution quality regions. We show that using
neighborhood clusters instead of individual neighborhoods helps to reduce the
parameter configuration space without misleading the search of the tuning
procedure. Moreover, this method is problem-independent and potentially can be
applied in similar contexts.
| [
{
"version": "v1",
"created": "Sat, 12 Mar 2016 12:38:32 GMT"
}
] | 1,458,604,800,000 | [
[
"Dang",
"Nguyen Thi Thanh",
""
],
[
"De Causmaecker",
"Patrick",
""
]
] |
1603.07029 | Michael Wiser | Michael J Wiser, Louise S Mead, James J Smith, Robert T Pennock | Comparing Human and Automated Evaluation of Open-Ended Student Responses
to Questions of Evolution | Submitted to ALife 2016 | Artificial Life XV: Proceedings of the Fifteenth International
Conference on Artificial life. pp. 116 - 122. MIT Press. 2016 | 10.7551/978-0-262-33936-0-ch025 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Written responses can provide a wealth of data in understanding student
reasoning on a topic. Yet they are time- and labor-intensive to score,
requiring many instructors to forego them except as limited parts of summative
assessments at the end of a unit or course. Recent developments in Machine
Learning (ML) have produced computational methods of scoring written responses
for the presence or absence of specific concepts. Here, we compare the scores
from one particular ML program -- EvoGrader -- to human scoring of responses to
structurally- and content-similar questions that are distinct from the ones the
program was trained on. We find that there is substantial inter-rater
reliability between the human and ML scoring. However, sufficient systematic
differences remain between the human and ML scoring that we advise only using
the ML scoring for formative, rather than summative, assessment of student
reasoning.
| [
{
"version": "v1",
"created": "Tue, 22 Mar 2016 23:36:02 GMT"
}
] | 1,525,737,600,000 | [
[
"Wiser",
"Michael J",
""
],
[
"Mead",
"Louise S",
""
],
[
"Smith",
"James J",
""
],
[
"Pennock",
"Robert T",
""
]
] |
1603.07417 | Stephen Makonin | Md. Zulfiquar Ali Bhotto, Stephen Makonin, Ivan V. Bajic | Load Disaggregation Based on Aided Linear Integer Programming | null | null | 10.1109/TCSII.2016.2603479 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Load disaggregation based on aided linear integer programming (ALIP) is
proposed. We start with a conventional linear integer programming (IP) based
disaggregation and enhance it in several ways. The enhancements include
additional constraints, correction based on a state diagram, median filtering,
and linear programming-based refinement. With the aid of these enhancements,
the performance of IP-based disaggregation is significantly improved. The
proposed ALIP system relies only on the instantaneous load samples instead of
waveform signatures, and hence does not crucially depend on high sampling
frequency. Experimental results show that the proposed ALIP system performs
better than the conventional IP-based load disaggregation system.
| [
{
"version": "v1",
"created": "Thu, 24 Mar 2016 02:54:45 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Aug 2016 00:27:59 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Aug 2016 16:32:57 GMT"
}
] | 1,472,601,600,000 | [
[
"Bhotto",
"Md. Zulfiquar Ali",
""
],
[
"Makonin",
"Stephen",
""
],
[
"Bajic",
"Ivan V.",
""
]
] |
1603.08714 | Kristijonas \v{C}yras | Kristijonas Cyras, Francesca Toni | Properties of ABA+ for Non-Monotonic Reasoning | This is a revised version of the paper presented at the workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate properties of ABA+, a formalism that extends the well studied
structured argumentation formalism Assumption-Based Argumentation (ABA) with a
preference handling mechanism. In particular, we establish desirable properties
that ABA+ semantics exhibit. These pave the way for the satisfaction by ABA+ of some
(arguably) desirable principles of preference handling in argumentation and
nonmonotonic reasoning, as well as non-monotonic inference properties of ABA+
under various semantics.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 10:37:38 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Jan 2017 15:59:11 GMT"
},
{
"version": "v3",
"created": "Sun, 5 Nov 2017 12:04:57 GMT"
}
] | 1,510,012,800,000 | [
[
"Cyras",
"Kristijonas",
""
],
[
"Toni",
"Francesca",
""
]
] |
1603.08789 | Jean-Guy Mailly | Jean-Guy Mailly | Using Enthymemes to Fill the Gap between Logical Argumentation and
Revision of Abstract Argumentation Frameworks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a preliminary work on an approach to fill the gap
between logic-based argumentation and the numerous approaches to tackle the
dynamics of abstract argumentation frameworks. Our idea is that, even when
arguments and attacks are defined by means of a logical belief base, there may
be some uncertainty about how accurate the content of an argument is, and so
about the presence (or absence) of attacks concerning it. We use enthymemes to
illustrate this notion of uncertainty of arguments and attacks. Indeed, as
argued in the literature, real arguments are often enthymemes instead of
completely specified deductive arguments. This means that some parts of the
pair (support, claim) may be missing because they are supposed to belong to
some "common knowledge", and then should be deduced by the agent which receives
the enthymeme. But the perception that agents have of the common knowledge may
be wrong, and then a first agent may state an enthymeme that her opponent is
not able to decode in an accurate way. It is likely that the decoding of the
enthymeme by the agent leads to mistaken attacks between this new argument and
the existing ones. In this case, the agent can receive some information about
attacks or arguments acceptance statuses which disagree with her argumentation
framework. We exemplify a way to incorporate this new piece of information by
means of existing works on the dynamics of abstract argumentation frameworks.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 14:29:00 GMT"
}
] | 1,459,296,000,000 | [
[
"Mailly",
"Jean-Guy",
""
]
] |
1603.08869 | Tiancheng Zhao | Tiancheng Zhao, Mohammad Gowayyed | Algorithms for Batch Hierarchical Reinforcement Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Hierarchical Reinforcement Learning (HRL) exploits temporal abstraction to
solve large Markov Decision Processes (MDP) and provide transferable subtask
policies. In this paper, we introduce an off-policy HRL algorithm: Hierarchical
Q-value Iteration (HQI). We show that it is possible to effectively learn
recursive optimal policies for any valid hierarchical decomposition of the
original MDP, given a fixed dataset collected from a flat stochastic behavioral
policy. We first formally prove the convergence of the algorithm for tabular
MDP. Then our experiments on the Taxi domain show that HQI converges faster
than a flat Q-value Iteration and enjoys easy state abstraction. Also, we
demonstrate that our algorithm is able to learn optimal policies for different
hierarchical structures from the same fixed dataset, which enables model
comparison without recollecting data.
| [
{
"version": "v1",
"created": "Tue, 29 Mar 2016 18:17:17 GMT"
}
] | 1,459,296,000,000 | [
[
"Zhao",
"Tiancheng",
""
],
[
"Gowayyed",
"Mohammad",
""
]
] |
1603.09194 | \"Ozg\"ur L\"utf\"u \"Oz\c{c}ep | \"Ozg\"ur L\"utf\"u \"Oz\c{c}ep | Iterated Ontology Revision by Reinterpretation | 10 pages, 1 figure, to be published in Proceedings of the 16th
International Workshop on Non-Monotonic Reasoning (NMR'16) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Iterated applications of belief change operators are essential for different
scenarios such as that of ontology evolution where new information is not
presented at once but only in piecemeal fashion within a sequence. I discuss
iterated applications of so called reinterpretation operators that trace
conflicts between ontologies back to the ambiguous of symbols and that provide
conflict resolution strategies with bridging axioms. The discussion centers on
adaptations of the classical iteration postulates according to Darwiche and
Pearl. The main result of the paper is that reinterpretation operators fulfill
the postulates for sequences containing only atomic triggers. For complex
triggers, a fulfillment is not guaranteed and indeed there are different
reasons for the different postulates why they should not be fulfilled in the
particular scenario of ontology revision with well developed ontologies.
| [
{
"version": "v1",
"created": "Wed, 30 Mar 2016 13:50:13 GMT"
}
] | 1,459,382,400,000 | [
[
"Özçep",
"Özgür Lütfü",
""
]
] |
1603.09429 | Aaron Hunter | Aaron Hunter | Ordinal Conditional Functions for Nearly Counterfactual Revision | 7 pages, 1 figure, presented at the International Workshop on
Non-monotonic Reasoning 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are interested in belief revision involving conditional statements where
the antecedent is almost certainly false. To represent such problems, we use
Ordinal Conditional Functions that may take infinite values. We model belief
change in this context through simple arithmetical operations that allow us to
capture the intuition that certain antecedents cannot be validated by any
number of observations. We frame our approach as a form of finite belief
improvement, and we propose a model of conditional belief revision in which
only the "right" hypothetical levels of implausibility are revised.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 00:48:40 GMT"
}
] | 1,459,468,800,000 | [
[
"Hunter",
"Aaron",
""
]
] |
1603.09502 | Thomas Linsbichler | Ringo Baumann, Thomas Linsbichler and Stefan Woltran | Verifiability of Argumentation Semantics | Contribution to the 16h International Workshop on Non-Monotonic
Reasoning, 2016, Cape Town | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dung's abstract argumentation theory is a widely used formalism to model
conflicting information and to draw conclusions in such situations. Hereby, the
knowledge is represented by so-called argumentation frameworks (AFs) and the
reasoning is done via semantics extracting acceptable sets. All reasonable
semantics are based on the notion of conflict-freeness which means that
arguments are only jointly acceptable when they are not linked within the AF.
In this paper, we study the question which information on top of conflict-free
sets is needed to compute extensions of a semantics at hand. We introduce a
hierarchy of so-called verification classes specifying the required amount of
information. We show that well-known standard semantics are exactly verifiable
through a certain such class. Our framework also gives a means to study
semantics lying in between known semantics, thus contributing to a more abstract
understanding of the different features argumentation semantics offer.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 09:29:02 GMT"
}
] | 1,459,468,800,000 | [
[
"Baumann",
"Ringo",
""
],
[
"Linsbichler",
"Thomas",
""
],
[
"Woltran",
"Stefan",
""
]
] |
1603.09511 | Jean-Guy Mailly | Adrian Haret, Jean-Guy Mailly, Stefan Woltran | Distributing Knowledge into Simple Bases | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the behavior of belief change operators for fragments of
classical logic has received increasing interest over the last years. Results
in this direction are mainly concerned with adapting representation theorems.
However, fragment-driven belief change also leads to novel research questions.
In this paper we propose the concept of belief distribution, which can be
understood as the reverse task of merging. More specifically, we are interested
in the following question: given an arbitrary knowledge base $K$ and some
merging operator $\Delta$, can we find a profile $E$ and a constraint $\mu$,
both from a given fragment of classical logic, such that $\Delta_\mu(E)$ yields
a result equivalent to $K$? In other words, we are interested in seeing if $K$
can be distributed into knowledge bases of simpler structure, such that the
task of merging allows for a reconstruction of the original knowledge. Our
initial results show that merging based on drastic distance allows for an easy
distribution of knowledge, while the power of distribution for operators based
on Hamming distance relies heavily on the fragment of choice.
| [
{
"version": "v1",
"created": "Thu, 31 Mar 2016 09:59:02 GMT"
}
] | 1,459,468,800,000 | [
[
"Haret",
"Adrian",
""
],
[
"Mailly",
"Jean-Guy",
""
],
[
"Woltran",
"Stefan",
""
]
] |
1603.09728 | Shafi'i Muhammad Abdulhamid Mr | Shafii Muhammad Abdulhamid, Muhammad Shafie Abd Latiff, Syed Hamid
Hussain Madni, Osho Oluwafemi | A Survey of League Championship Algorithm: Prospects and Challenges | 10 pages, 2 figures, 2 tables, Indian Journal of Science and
Technology, 2015 | null | 10.17485/ijst/2015/v8iS3/60476 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The League Championship Algorithm (LCA) is sport-inspired optimization
algorithm that was introduced by Ali Husseinzadeh Kashan in the year 2009. It
has since drawn enormous interest among the researchers because of its
potential efficiency in solving many optimization problems and real-world
applications. The LCA has also shown great potentials in solving
non-deterministic polynomial time (NP-complete) problems. This survey presents
a brief synopsis of the LCA literature in peer-reviewed journals, conferences
and book chapters. These research articles are then categorized according to
indexing in the major academic databases (Web of Science, Scopus, IEEE Xplore
and the Google Scholar). The analysis was also done to explore the prospects
and the challenges of the algorithm and its acceptability among researchers.
This systematic categorization can be used as a basis for future studies.
| [
{
"version": "v1",
"created": "Sat, 18 Jul 2015 10:09:11 GMT"
}
] | 1,459,468,800,000 | [
[
"Abdulhamid",
"Shafii Muhammad",
""
],
[
"Latiff",
"Muhammad Shafie Abd",
""
],
[
"Madni",
"Syed Hamid Hussain",
""
],
[
"Oluwafemi",
"Osho",
""
]
] |
1604.00300 | Benjamin Negrevergne | R\'emi Coletta and Benjamin Negrevergne | A SAT model to mine flexible sequences in transactional datasets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional pattern mining algorithms generally suffer from a lack of
flexibility. In this paper, we propose a SAT formulation of the problem to
successfully mine frequent flexible sequences occurring in transactional
datasets. Our SAT-based approach can easily be extended with extra constraints
to address a broad range of pattern mining applications. To demonstrate this
claim, we formulate and add several constraints, such as gap and span
constraints, to our model in order to extract more specific patterns. We also
use interactive solving to perform important derived tasks, such as closed
pattern mining or maximal pattern mining. Finally, we prove the practical
feasibility of our SAT model by running experiments on two real datasets.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 15:49:51 GMT"
}
] | 1,459,728,000,000 | [
[
"Coletta",
"Rémi",
""
],
[
"Negrevergne",
"Benjamin",
""
]
] |
1604.00301 | Valentina Gliozzi | Valentina Gliozzi | A strengthening of rational closure in DLs: reasoning about multiple
aspects | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a logical analysis of the concept of typicality, central in human
cognition (Rosch,1978). We start from a previously proposed extension of the
basic Description Logic ALC (a computationally tractable fragment of First
Order Logic, used to represent concept inclusions and ontologies) with a
typicality operator T that allows one to consistently represent the attribution to
classes of individuals of properties with exceptions (as in the classic example
(i) typical birds fly, (ii) penguins are birds but (iii) typical penguins don't
fly). We then strengthen this extension in order to separately reason about the
typicality with respect to different aspects (e.g., flying, having nice
feathers: in the previous example, penguins may not inherit the property of
flying, for which they are exceptional, but can nonetheless inherit other
properties, such as having nice feathers).
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 15:50:24 GMT"
}
] | 1,459,728,000,000 | [
[
"Gliozzi",
"Valentina",
""
]
] |
1604.00377 | Jin-Kao Hao | Yangming Zhou, Jin-Kao Hao, B\'eatrice Duval | Reinforcement learning based local search for grouping problems: A case
study on graph coloring | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grouping problems aim to partition a set of items into multiple mutually
disjoint subsets according to some specific criterion and constraints. Grouping
problems cover a large class of important combinatorial optimization problems
that are generally computationally difficult. In this paper, we propose a
general solution approach for grouping problems, i.e., reinforcement learning
based local search (RLS), which combines reinforcement learning techniques with
descent-based local search. The viability of the proposed approach is verified
on a well-known representative grouping problem (graph coloring) where a very
simple descent-based coloring algorithm is applied. Experimental studies on
popular DIMACS and COLOR02 benchmark graphs indicate that RLS achieves
competitive performances compared to a number of well-known coloring
algorithms.
| [
{
"version": "v1",
"created": "Fri, 1 Apr 2016 19:38:35 GMT"
}
] | 1,459,728,000,000 | [
[
"Zhou",
"Yangming",
""
],
[
"Hao",
"Jin-Kao",
""
],
[
"Duval",
"Béatrice",
""
]
] |
1604.00545 | Janos Kramar | James Babcock, Janos Kramar, Roman Yampolskiy | The AGI Containment Problem | null | Lecture Notes in Artificial Intelligence 9782 (AGI 2016,
Proceedings) 53-63 | 10.1007/978-3-319-41649-6 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is considerable uncertainty about what properties, capabilities and
motivations future AGIs will have. In some plausible scenarios, AGIs may pose
security risks arising from accidents and defects. In order to mitigate these
risks, prudent early AGI research teams will perform significant testing on
their creations before use. Unfortunately, if an AGI has human-level or greater
intelligence, testing itself may not be safe; some natural AGI goal systems
create emergent incentives for AGIs to tamper with their test environments,
make copies of themselves on the internet, or convince developers and operators
to do dangerous things. In this paper, we survey the AGI containment problem -
the question of how to build a container in which tests can be conducted safely
and reliably, even on AGIs with unknown motivations and capabilities that could
be dangerous. We identify requirements for AGI containers, available
mechanisms, and weaknesses that need to be addressed.
| [
{
"version": "v1",
"created": "Sat, 2 Apr 2016 19:26:05 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2016 15:37:38 GMT"
},
{
"version": "v3",
"created": "Wed, 13 Jul 2016 14:54:16 GMT"
}
] | 1,468,454,400,000 | [
[
"Babcock",
"James",
""
],
[
"Kramar",
"Janos",
""
],
[
"Yampolskiy",
"Roman",
""
]
] |
1604.00681 | Edmond Awad | Edmond Awad, Jean-Fran\c{c}ois Bonnefon, Martin Caminada, Thomas
Malone and Iyad Rahwan | Experimental Assessment of Aggregation Principles in
Argumentation-enabled Collective Intelligence | null | ACM Transactions on Internet Technology (TOIT), 17(3), 29 (2017) | 10.1145/3053371 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | On the Web, there is always a need to aggregate opinions from the crowd (as
in posts, social networks, forums, etc.). Different mechanisms have been
implemented to capture these opinions such as "Like" in Facebook, "Favorite" in
Twitter, thumbs-up/down, flagging, and so on. However, in more contested
domains (e.g. Wikipedia, political discussion, and climate change discussion)
these mechanisms are not sufficient since they only deal with each issue
independently without considering the relationships between different claims.
We can view a set of conflicting arguments as a graph in which the nodes
represent arguments and the arcs between these nodes represent the defeat
relation. A group of people can then collectively evaluate such graphs. To do
this, the group must use a rule to aggregate their individual opinions about
the entire argument graph. Here, we present the first experimental evaluation
of different principles commonly employed by aggregation rules presented in the
literature. We use randomized controlled experiments to investigate which
principles people consider better at aggregating opinions under different
conditions. Our analysis reveals a number of factors, not captured by
traditional formal models, that play an important role in determining the
efficacy of aggregation. These results help bring formal models of
argumentation closer to real-world application.
| [
{
"version": "v1",
"created": "Sun, 3 Apr 2016 19:58:18 GMT"
},
{
"version": "v2",
"created": "Sun, 12 Feb 2017 18:38:35 GMT"
}
] | 1,497,916,800,000 | [
[
"Awad",
"Edmond",
""
],
[
"Bonnefon",
"Jean-François",
""
],
[
"Caminada",
"Martin",
""
],
[
"Malone",
"Thomas",
""
],
[
"Rahwan",
"Iyad",
""
]
] |
1604.00693 | Edmond Awad | Edmond Awad, Martin Caminada, Gabriella Pigozzi, Miko{\l}aj
Podlaszewski and Iyad Rahwan | Pareto Optimality and Strategy Proofness in Group Argument Evaluation
(Extended Version) | null | null | 10.1093/logcom/exx017 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An inconsistent knowledge base can be abstracted as a set of arguments and a
defeat relation among them. There can be more than one consistent way to
evaluate such an argumentation graph. Collective argument evaluation is the
problem of aggregating the opinions of multiple agents on how a given set of
arguments should be evaluated. It is crucial not only to ensure that the
outcome is logically consistent, but also satisfies measures of social
optimality and immunity to strategic manipulation. This is because agents have
their individual preferences about what the outcome ought to be. In the current
paper, we analyze three previously introduced argument-based aggregation
operators with respect to Pareto optimality and strategy proofness under
different general classes of agent preferences. We highlight fundamental
trade-offs between strategic manipulability and social optimality on one hand,
and classical logical criteria on the other. Our results motivate further
investigation into the relationship between social choice and argumentation
theory. The results are also relevant for choosing an appropriate aggregation
operator given the criteria that are considered more important, as well as the
nature of agents' preferences.
| [
{
"version": "v1",
"created": "Sun, 3 Apr 2016 21:48:37 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Apr 2017 20:02:55 GMT"
}
] | 1,497,916,800,000 | [
[
"Awad",
"Edmond",
""
],
[
"Caminada",
"Martin",
""
],
[
"Pigozzi",
"Gabriella",
""
],
[
"Podlaszewski",
"Mikołaj",
""
],
[
"Rahwan",
"Iyad",
""
]
] |
1604.00799 | Alessandro Artale | Alessandro Artale and Enrico Franconi | Extending DLR with Labelled Tuples, Projections, Functional Dependencies
and Objectification (full version) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce an extension of the n-ary description logic DLR to deal with
attribute-labelled tuples (generalising the positional notation), with
arbitrary projections of relations (inclusion dependencies), generic functional
dependencies and with global and local objectification (reifying relations or
their projections). We show how a simple syntactic condition on the appearance
of projections and functional dependencies in a knowledge base makes the
language decidable without increasing the computational complexity of the basic
DLR language.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 10:11:52 GMT"
}
] | 1,459,814,400,000 | [
[
"Artale",
"Alessandro",
""
],
[
"Franconi",
"Enrico",
""
]
] |
1604.00869 | Sundong Kim | Sundong Kim | Automatic Knowledge Base Evolution by Learning Instances | 11 pages, submitted to International Semantic Web Conference 2014
(Rejected), Revising(2016-04-04~) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge base is the way to store structured and unstructured data
throughout the web. Since the size of the web is increasing rapidly, there are
huge needs to structure the knowledge in a fully automated way. However
fully-automated knowledge-base evolution on the Semantic Web is a major
challenges, although there are many ontology evolution techniques available.
Therefore learning ontology automatically can contribute to the semantic web
society significantly. In this paper, we propose full-automated ontology
learning algorithm to generate refined knowledge base from incomplete knowledge
base and rdf-triples. Our algorithm is data-driven approach which is based on
the property of each instance. Ontology class is being elaborated by
generalizing frequent property of its instances. By using that developed class
information, each instance can find its most relatively matching class. By
repeating these two steps, we achieve fully-automated ontology evolution from
incomplete basic knowledge base.
| [
{
"version": "v1",
"created": "Mon, 4 Apr 2016 14:23:25 GMT"
}
] | 1,459,814,400,000 | [
[
"Kim",
"Sundong",
""
]
] |
1604.01277 | Ramon Fraga Pereira | Ramon Fraga Pereira and Felipe Meneguzzi | Landmark-Based Plan Recognition | Accepted as short paper in the 22nd European Conference on Artificial
Intelligence, ECAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognition of goals and plans using incomplete evidence from action
execution can be done efficiently by using planning techniques. In many
applications it is important to recognize goals and plans not only accurately,
but also quickly. In this paper, we develop a heuristic approach for
recognizing plans based on planning techniques that rely on ordering
constraints to filter candidate goals from observations. These ordering
constraints are called landmarks in the planning literature, which are facts or
actions that cannot be avoided to achieve a goal. We show the applicability of
planning landmarks in two settings: first, we use it directly to develop a
heuristic-based plan recognition approach; second, we refine an existing
planning-based plan recognition approach by pre-filtering its candidate goals.
Our empirical evaluation shows that our approach is not only substantially more
accurate than the state-of-the-art in all available datasets, it is also an
order of magnitude faster.
| [
{
"version": "v1",
"created": "Tue, 5 Apr 2016 14:44:03 GMT"
},
{
"version": "v2",
"created": "Tue, 28 Jun 2016 17:56:47 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Feb 2017 01:15:59 GMT"
}
] | 1,486,512,000,000 | [
[
"Pereira",
"Ramon Fraga",
""
],
[
"Meneguzzi",
"Felipe",
""
]
] |
1604.02126 | Gavin Rens | Gavin Rens | On Stochastic Belief Revision and Update and their Combination | Presented at the Sixteenth International Workshop on Non-Monotonic
Reasoning, 22-24 April 2016, Cape Town, South Africa. 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I propose a framework for an agent to change its probabilistic beliefs when a
new piece of propositional information $\alpha$ is observed. Traditionally,
belief change occurs by either a revision process or by an update process,
depending on whether the agent is informed with $\alpha$ in a static world or,
respectively, whether $\alpha$ is a 'signal' from the environment due to an
event occurring. Boutilier suggested a unified model of qualitative belief
change, which "combines aspects of revision and update, providing a more
realistic characterization of belief change." In this paper, I propose a
unified model of quantitative belief change, where an agent's beliefs are
represented as a probability distribution over possible worlds. As does
Boutilier, I take a dynamical systems perspective. The proposed approach is
evaluated against several rationality postulated, and some properties of the
approach are worked out.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 19:28:00 GMT"
}
] | 1,460,073,600,000 | [
[
"Rens",
"Gavin",
""
]
] |
1604.02133 | Gavin Rens | Gavin Rens, Thomas Meyer, Giovanni Casini | Revising Incompletely Specified Convex Probabilistic Belief Bases | Presented at the Sixteenth International Workshop on Non-Monotonic
Reasoning, 22-24 April 2016, Cape Town, South Africa. 9.25 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a method for an agent to revise its incomplete probabilistic
beliefs when a new piece of propositional information is observed. In this
work, an agent's beliefs are represented by a set of probabilistic formulae --
a belief base. The method involves determining a representative set of
'boundary' probability distributions consistent with the current belief base,
revising each of these probability distributions and then translating the
revised information into a new belief base. We use a version of Lewis Imaging
as the revision operation. The correctness of the approach is proved. The
expressivity of the belief bases under consideration are rather restricted, but
has some applications. We also discuss methods of belief base revision
employing the notion of optimum entropy, and point out some of the benefits and
difficulties in those methods. Both the boundary distribution method and the
optimum entropy method are reasonable, yet yield different results.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 19:41:35 GMT"
}
] | 1,460,073,600,000 | [
[
"Rens",
"Gavin",
""
],
[
"Meyer",
"Thomas",
""
],
[
"Casini",
"Giovanni",
""
]
] |
1604.02323 | Kennedy Ehimwenma | Kennedy E. Ehimwenma, Paul Crowther and Martin Beer | A system of serial computation for classified rules prediction in
non-regular ontology trees | 13 pages, 15 figures, International Journal article, PhD research
work | International Journal of Artificial Intelligence and Applications
(IJAIA) March 2016, Vol 7(2), pp. 21-33 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Objects or structures that are regular take uniform dimensions. Based on the
concepts of regular models, our previous research work has developed a system
of a regular ontology that models learning structures in a multiagent system
for uniform pre-assessments in a learning environment. This regular ontology
has led to the modelling of a classified rules learning algorithm that predicts
the actual number of rules needed for inductive learning processes and decision
making in a multiagent system. But not all processes or models are regular.
Thus this paper presents a system of polynomial equations that can estimate and
predict the required number of rules of a non-regular ontology model given some
defined parameters.
| [
{
"version": "v1",
"created": "Fri, 8 Apr 2016 12:12:17 GMT"
}
] | 1,460,332,800,000 | [
[
"Ehimwenma",
"Kennedy E.",
""
],
[
"Crowther",
"Paul",
""
],
[
"Beer",
"Martin",
""
]
] |
1604.02509 | Shiwali Mohan | Shiwali Mohan, Aaron Mininger, John Laird | Towards an Indexical Model of Situated Language Comprehension for
Cognitive Agents in Physical Worlds | null | Advances in Cognitive Systems 3 (2014) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a computational model of situated language comprehension based on
the Indexical Hypothesis that generates meaning representations by translating
amodal linguistic symbols to modal representations of beliefs, knowledge, and
experience external to the linguistic system. This Indexical Model incorporates
multiple information sources, including perceptions, domain knowledge, and
short-term and long-term experiences during comprehension. We show that
exploiting diverse information sources can alleviate ambiguities that arise
from contextual use of underspecific referring expressions and unexpressed
argument alternations of verbs. The model is being used to support linguistic
interactions in Rosie, an agent implemented in Soar that learns from
instruction.
| [
{
"version": "v1",
"created": "Sat, 9 Apr 2016 01:57:13 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Oct 2022 05:46:43 GMT"
}
] | 1,666,224,000,000 | [
[
"Mohan",
"Shiwali",
""
],
[
"Mininger",
"Aaron",
""
],
[
"Laird",
"John",
""
]
] |
1604.02780 | Carlos Leandro | Carlos Leandro | Knowledge Extraction and Knowledge Integration governed by
{\L}ukasiewicz Logics | 38 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of machine learning in particular and artificial intelligent
in general has been strongly conditioned by the lack of an appropriate
interface layer between deduction, abduction and induction. In this work we
extend traditional algebraic specification methods in this direction. Here we
assume that such an interface for AI emerges from an adequate Neural-Symbolic
integration. This integration is made for a universe of discourse described on a
Topos governed by a many-valued {\L}ukasiewicz logic. Sentences are integrated
in a symbolic knowledge base describing the problem domain, codified using a
graphic-based language, wherein every logic connective is defined by a neuron
in an artificial network. This allows the integration of first-order formulas
into a network architecture as background knowledge, and simplifies symbolic
rule extraction from trained networks. For the training of such neural networks we
modified the Levenberg-Marquardt algorithm, restricting the knowledge
dissemination in the network structure using soft crystallization. This
procedure reduces neural network plasticity without drastically damaging the
learning performance, allowing the emergence of symbolic patterns. This makes
the descriptive power of produced neural networks similar to the descriptive
power of {\L}ukasiewicz logic language, reducing the information lost on
translation between symbolic and connectionist structures. We tested this
method on the extraction of knowledge from specified structures. For this, we
present the notion of fuzzy state automata, and we use automata behaviour to
infer their structure. We use this type of automata in the generation of models
for relations specified as symbolic background knowledge.
| [
{
"version": "v1",
"created": "Mon, 11 Apr 2016 03:23:21 GMT"
}
] | 1,460,419,200,000 | [
[
"Leandro",
"Carlos",
""
]
] |
1604.03210 | Son-Il Kwak | Kwak Son Il | An Analysis of General Fuzzy Logic and Fuzzy Reasoning Method | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, we describe the fuzzy logic, fuzzy language and algorithms
as the basis of fuzzy reasoning, one of the intelligent information processing
method, and then describe the general fuzzy reasoning method.
| [
{
"version": "v1",
"created": "Thu, 7 Apr 2016 13:05:10 GMT"
}
] | 1,460,505,600,000 | [
[
"Il",
"Kwak Son",
""
]
] |
1604.04096 | Mauro Vallati | Valerio Velardo and Mauro Vallati | A General Framework for Describing Creative Agents | 14 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational creativity is a subfield of AI focused on developing and
studying creative systems. Few academic studies analysing the behaviour of
creative agents from a theoretical viewpoint have been proposed. The proposed
frameworks are vague and hard to exploit; moreover, such works are focused on a
notion of creativity tailored for humans.
In this paper we introduce General Creativity, which extends that traditional
notion. General Creativity provides the basis for a formalised theoretical
framework, that allows one to univocally describe any creative agent, and their
behaviour within societies of creative systems. Given the growing number of AI
creative systems developed over recent years, it is of fundamental importance
to understand how they could influence each other as well as how to gauge their
impact on human society. In particular, in this paper we exploit the proposed
framework for (i) identifying different forms of creativity; (ii) describing
some typical creative agents behaviour, and (iii) analysing the dynamics of
societies in which both human and non-human creative systems coexist.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2016 10:13:43 GMT"
}
] | 1,460,678,400,000 | [
[
"Velardo",
"Valerio",
""
],
[
"Vallati",
"Mauro",
""
]
] |
1604.04315 | Carissa Schoenick | Carissa Schoenick, Peter Clark, Oyvind Tafjord, Peter Turney, Oren
Etzioni | Moving Beyond the Turing Test with the Allen AI Science Challenge | 7 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given recent successes in AI (e.g., AlphaGo's victory against Lee Sedol in
the game of GO), it's become increasingly important to assess: how close are AI
systems to human-level intelligence? This paper describes the Allen AI Science
Challenge---an approach towards that goal which led to a unique Kaggle
Competition, its results, the lessons learned, and our next steps.
| [
{
"version": "v1",
"created": "Thu, 14 Apr 2016 22:43:30 GMT"
},
{
"version": "v2",
"created": "Tue, 17 May 2016 18:47:00 GMT"
},
{
"version": "v3",
"created": "Wed, 22 Feb 2017 20:02:46 GMT"
}
] | 1,487,894,400,000 | [
[
"Schoenick",
"Carissa",
""
],
[
"Clark",
"Peter",
""
],
[
"Tafjord",
"Oyvind",
""
],
[
"Turney",
"Peter",
""
],
[
"Etzioni",
"Oren",
""
]
] |
1604.04506 | Paolo Pareti Mr. | Paolo Pareti, Benoit Testu, Ryutaro Ichise, Ewan Klein, Adam Barker | Integrating Know-How into the Linked Data Cloud | The 19th International Conference on Knowledge Engineering and
Knowledge Management (EKAW 2014), 24-28 November 2014, Link\"oping, Sweden | Knowledge Engineering and Knowledge Management, volume 8876 of
Lecture Notes in Computer Science, pages 385-396. Springer International
Publishing (2014) | 10.1007/978-3-319-13704-9_30 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents the first framework for integrating procedural knowledge,
or "know-how", into the Linked Data Cloud. Know-how available on the Web, such
as step-by-step instructions, is largely unstructured and isolated from other
sources of online knowledge. To overcome these limitations, we propose
extending to procedural knowledge the benefits that Linked Data has already
brought to representing, retrieving and reusing declarative knowledge. We
describe a framework for representing generic know-how as Linked Data and for
automatically acquiring this representation from existing resources on the Web.
This system also allows the automatic generation of links between different
know-how resources, and between those resources and other online knowledge
bases, such as DBpedia. We discuss the results of applying this framework to a
real-world scenario and we show how it outperforms existing manual
community-driven integration efforts.
| [
{
"version": "v1",
"created": "Fri, 15 Apr 2016 13:52:12 GMT"
}
] | 1,460,937,600,000 | [
[
"Pareti",
"Paolo",
""
],
[
"Testu",
"Benoit",
""
],
[
"Ichise",
"Ryutaro",
""
],
[
"Klein",
"Ewan",
""
],
[
"Barker",
"Adam",
""
]
] |
1604.04660 | Jordi Bieger | Kristinn R. Th\'orisson and Jordi Bieger and Thr\"ostur Thorarensen
and J\'ona S. Sigur{\dh}ard\'ottir, and Bas R. Steunebrink | Why Artificial Intelligence Needs a Task Theory --- And What It Might
Look Like | accepted to the Ninth Conference on Artificial General Intelligence
(AGI-16) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concept of "task" is at the core of artificial intelligence (AI): Tasks
are used for training and evaluating AI systems, which are built in order to
perform and automatize tasks we deem useful. In other fields of engineering,
theoretical foundations allow thorough evaluation of designs by methodical
manipulation of well understood parameters with a known role and importance;
this allows an aeronautics engineer, for instance, to systematically assess the
effects of wind speed on an airplane's performance and stability. No framework
exists in AI that allows this kind of methodical manipulation: Performance
results on the few tasks in current use (cf. board games, question-answering)
cannot be easily compared, however similar or different. The issue is even more
acute with respect to artificial *general* intelligence systems, which must
handle unanticipated tasks whose specifics cannot be known beforehand. A *task
theory* would enable addressing tasks at the *class* level, bypassing their
specifics, providing the appropriate formalization and classification of tasks,
environments, and their parameters, resulting in more rigorous ways of
measuring, comparing, and evaluating intelligent behavior. Even modest
improvements in this direction would surpass the current ad-hoc nature of
machine learning and AI evaluation. Here we discuss the main elements of the
argument for a task theory and present an outline of what it might look like
for physical tasks.
| [
{
"version": "v1",
"created": "Fri, 15 Apr 2016 23:36:44 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2016 17:16:25 GMT"
}
] | 1,463,097,600,000 | [
[
"Thórisson",
"Kristinn R.",
""
],
[
"Bieger",
"Jordi",
""
],
[
"Thorarensen",
"Thröstur",
""
],
[
"Sigurðardóttir",
"Jóna S.",
""
],
[
"Steunebrink",
"Bas R.",
""
]
] |
1604.04795 | Jacopo Urbani | Jacopo Urbani, Sourav Dutta, Sairam Gurajada, Gerhard Weikum | KOGNAC: Efficient Encoding of Large Knowledge Graphs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many Web applications require efficient querying of large Knowledge Graphs
(KGs). We propose KOGNAC, a dictionary-encoding algorithm designed to improve
SPARQL querying with a judicious combination of statistical and semantic
techniques. In KOGNAC, frequent terms are detected with a frequency
approximation algorithm and encoded to maximise compression. Infrequent terms
are semantically grouped into ontological classes and encoded to increase data
locality. We evaluated KOGNAC in combination with state-of-the-art RDF engines,
and observed that it significantly improves SPARQL querying on KGs with up to
1B edges.
| [
{
"version": "v1",
"created": "Sat, 16 Apr 2016 20:54:12 GMT"
},
{
"version": "v2",
"created": "Sun, 10 Jul 2016 16:24:37 GMT"
}
] | 1,468,281,600,000 | [
[
"Urbani",
"Jacopo",
""
],
[
"Dutta",
"Sourav",
""
],
[
"Gurajada",
"Sairam",
""
],
[
"Weikum",
"Gerhard",
""
]
] |
1604.05273 | Ondrej Kuzelka | Ondrej Kuzelka, Jesse Davis, Steven Schockaert | Learning Possibilistic Logic Theories from Default Rules | Long version of a paper accepted at IJCAI 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a setting for learning possibilistic logic theories from
defaults of the form "if alpha then typically beta". We first analyse this
problem from the point of view of machine learning theory, determining the VC
dimension of possibilistic stratifications as well as the complexity of the
associated learning problems, after which we present a heuristic learning
algorithm that can easily scale to thousands of defaults. An important property
of our approach is that it is inherently able to handle noisy and conflicting
sets of defaults. Among others, this allows us to learn possibilistic logic
theories from crowdsourced data and to approximate propositional Markov logic
networks using heuristic MAP solvers. We present experimental results that
demonstrate the effectiveness of this approach.
| [
{
"version": "v1",
"created": "Mon, 18 Apr 2016 18:35:38 GMT"
}
] | 1,461,024,000,000 | [
[
"Kuzelka",
"Ondrej",
""
],
[
"Davis",
"Jesse",
""
],
[
"Schockaert",
"Steven",
""
]
] |
1604.05419 | Jake Chandler | Jake Chandler and Richard Booth | Extending the Harper Identity to Iterated Belief Change | Extended version of a paper accepted to IJCAI16. 23 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of iterated belief change has focused mainly on revision, with the
other main operator of AGM belief change theory, i.e. contraction, receiving
relatively little attention. In this paper we extend the Harper Identity from
single-step change to define iterated contraction in terms of iterated
revision. Specifically, just as the Harper Identity provides a recipe for
defining the belief set resulting from contracting A in terms of (i) the
initial belief set and (ii) the belief set resulting from revision by not-A, we
look at ways to define the plausibility ordering over worlds resulting from
contracting A in terms of (iii) the initial plausibility ordering, and (iv) the
plausibility ordering resulting from revision by not-A. After noting that the
most straightforward such extension leads to a trivialisation of the space of
permissible orderings, we provide a family of operators for combining
plausibility orderings that avoid such a result. These operators are
characterised in our domain of interest by a pair of intuitively compelling
properties, which turn out to enable the derivation of a number of iterated
contraction postulates from postulates for iterated revision. We finish by
observing that a salient member of this family allows for the derivation of
counterparts for contraction of some well known iterated revision operators, as
well as for defining new iterated contraction operators.
| [
{
"version": "v1",
"created": "Tue, 19 Apr 2016 03:36:20 GMT"
}
] | 1,461,110,400,000 | [
[
"Chandler",
"Jake",
""
],
[
"Booth",
"Richard",
""
]
] |
1604.05535 | J. G. Wolff | J Gerard Wolff | The SP theory of intelligence and the representation and processing of
knowledge in the brain | null | Frontiers in Psychology, 7, 1584, 2016 | 10.3389/fpsyg.2016.01584 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The "SP theory of intelligence", with its realisation in the "SP computer
model", aims to simplify and integrate observations and concepts across
AI-related fields, with information compression as a unifying theme. This paper
describes how abstract structures and processes in the theory may be realised
in terms of neurons, their interconnections, and the transmission of signals
between neurons. This part of the SP theory -- "SP-neural" -- is a tentative
and partial model for the representation and processing of knowledge in the
brain. In the SP theory (apart from SP-neural), all kinds of knowledge are
represented with "patterns", where a pattern is an array of atomic symbols in
one or two dimensions. In SP-neural, the concept of a "pattern" is realised as
an array of neurons called a "pattern assembly", similar to Hebb's concept of a
"cell assembly" but with important differences. Central to the processing of
information in the SP system is the powerful concept of "multiple alignment",
borrowed and adapted from bioinformatics. Processes such as pattern
recognition, reasoning and problem solving are achieved via the building of
multiple alignments, while unsupervised learning -- significantly different
from the "Hebbian" kinds of learning -- is achieved by creating patterns from
sensory information and also by creating patterns from multiple alignments in
which there is a partial match between one pattern and another. Short-lived
neural structures equivalent to multiple alignments will be created via an
interplay of excitatory and inhibitory neural signals. The paper discusses
several associated issues, with relevant empirical evidence.
| [
{
"version": "v1",
"created": "Tue, 19 Apr 2016 12:04:14 GMT"
},
{
"version": "v2",
"created": "Thu, 12 May 2016 11:29:50 GMT"
}
] | 1,481,500,800,000 | [
[
"Wolff",
"J Gerard",
""
]
] |
1604.05557 | Pascal Faudemay | Pascal Faudemay | AGI and Reflexivity | submitted to ECAI-2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define a property of intelligent systems, which we call Reflexivity. In
human beings, it is one aspect of consciousness, and an element of
deliberation. We propose a conjecture, that this property is conditioned by a
topological property of the processes which implement this reflexivity. These
processes may be symbolic, or non-symbolic, e.g. connectionist. An architecture
which implements reflexivity may be based on the interaction of one or several
modules of deep learning, which may be specialized or not, and interconnected
in a relevant way. A necessary condition of reflexivity is the existence of
recurrence in its processes; we will examine in which cases this condition may
be sufficient. We will then examine how this topology and this property make
possible the expression of a second property, the deliberation. In a final
paragraph, we propose an evaluation of intelligent systems, based on the
fulfillment of all or some of these properties.
| [
{
"version": "v1",
"created": "Fri, 15 Apr 2016 19:39:54 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Apr 2016 18:48:02 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Apr 2016 18:49:12 GMT"
}
] | 1,461,888,000,000 | [
[
"Faudemay",
"Pascal",
""
]
] |
1604.06223 | Yohanes Khosiawan | Yohanes Khosiawan, Young Soo Park, Ilkyeong Moon, Janardhanan Mukund
Nilakantan, Izabela Nielsen | Task scheduling system for UAV operations in indoor environment | null | null | 10.1007/s00521-018-3373-9 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Application of UAV in indoor environment is emerging nowadays due to the
advancements in technology. UAV brings more space-flexibility in an occupied or
hardly-accessible indoor environment, e.g., shop floor of manufacturing
industry, greenhouse, nuclear powerplant. UAV helps in creating an autonomous
manufacturing system by executing tasks with less human intervention in
time-efficient manner. Consequently, a scheduler is one essential component to
be focused on; yet the number of reported studies on UAV scheduling has been
minimal. This work proposes a methodology with a heuristic (based on Earliest
Available Time algorithm) which assigns tasks to UAVs with an objective of
minimizing the makespan. In addition, a quick response towards uncertain events
and a quick creation of a new high-quality feasible schedule are needed. Hence,
the proposed heuristic is incorporated with Particle Swarm Optimization (PSO)
algorithm to find a quick near optimal schedule. This proposed methodology is
implemented into a scheduler and tested on a few scales of datasets generated
based on a real flight demonstration. Performance evaluation of the scheduler is
discussed in detail and the best solution obtained from a selected set of
parameters is reported.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 09:21:21 GMT"
}
] | 1,520,294,400,000 | [
[
"Khosiawan",
"Yohanes",
""
],
[
"Park",
"Young Soo",
""
],
[
"Moon",
"Ilkyeong",
""
],
[
"Nilakantan",
"Janardhanan Mukund",
""
],
[
"Nielsen",
"Izabela",
""
]
] |
1604.06356 | Marija Slavkovik | Marija Slavkovik and Wojciech Jamroga | Iterative Judgment Aggregation | null | null | 10.3233/978-1-61499-672-9-1528 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Judgment aggregation problems form a class of collective decision-making
problems represented in an abstract way, subsuming some well known problems
such as voting. A collective decision can be reached in many ways, but a direct
one-step aggregation of individual decisions is arguably most studied. Another
way to reach collective decisions is by iterative consensus building --
allowing each decision-maker to change their individual decision in response to
the choices of the other agents until a consensus is reached. Iterative
consensus building has so far only been studied for voting problems. Here we
propose an iterative judgment aggregation algorithm, based on movements in an
undirected graph, and we study for which instances it terminates with a
consensus. We also compare the computational complexity of our iterative
procedure with that of related judgment aggregation operators.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 15:26:02 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Apr 2016 14:08:06 GMT"
},
{
"version": "v3",
"created": "Tue, 12 Jul 2016 12:36:46 GMT"
}
] | 1,472,515,200,000 | [
[
"Slavkovik",
"Marija",
""
],
[
"Jamroga",
"Wojciech",
""
]
] |
1604.06484 | Jean-Charles Regin | Anthony Palmieri and Jean-Charles R\'egin and Pierre Schaus | Parallel Strategies Selection | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of selecting the best variable-value strategy for
solving a given problem in constraint programming. We show that the recent
Embarrassingly Parallel Search method (EPS) can be used for this purpose. EPS
proposes to solve a problem by decomposing it into many subproblems and giving
them on demand to workers that run in parallel. Our method uses a part of these
subproblems as a sample, in the statistical sense, for comparing some
strategies in order to select the most promising one, which is then used for
solving the remaining subproblems. For each subproblem of the sample, the
parallelism helps us to control the running time of the strategies because it
gives us the possibility to introduce timeouts by stopping a strategy when it
requires more than twice the time of the best one. Thus, we can deal with the
great disparity in solving times for the strategies. The selections we made are
based on the Wilcoxon signed rank tests because no assumption has to be made on
the distribution of the solving times and because these tests can deal with the
censored data that we obtain after introducing timeouts. The experiments we
performed on a set of classical benchmarks for satisfaction and optimization
problems show that our method obtains good performance by selecting the best
variable-value strategy almost all the time and by almost never choosing a
variable-value strategy which is dramatically slower than the best one. Our
method also outperforms the portfolio approach consisting of running some
strategies in parallel and is competitive with the multi-armed bandit
framework.
| [
{
"version": "v1",
"created": "Thu, 21 Apr 2016 20:40:35 GMT"
}
] | 1,461,542,400,000 | [
[
"Palmieri",
"Anthony",
""
],
[
"Régin",
"Jean-Charles",
""
],
[
"Schaus",
"Pierre",
""
]
] |
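A minimal, illustrative sketch of the sampling-based selection step described in the record above, assuming paired per-subproblem solving times for two candidate strategies have already been measured on an EPS sample (with runs censored at twice the best time). The function name and the use of scipy.stats.wilcoxon are assumptions made for illustration, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' code.
from scipy.stats import wilcoxon

def pick_strategy(times_a, times_b, alpha=0.05):
    """Return 'A' or 'B' given paired per-subproblem solving times (seconds),
    already capped (censored) at twice the faster strategy's time."""
    diffs = [a - b for a, b in zip(times_a, times_b)]
    if all(d == 0 for d in diffs):              # identical on the whole sample
        return "A"
    _, p_value = wilcoxon(times_a, times_b)     # signed-rank test on paired times
    if p_value < alpha:                         # significant difference: take the faster strategy
        return "A" if sum(times_a) < sum(times_b) else "B"
    return "A"                                  # no significant difference: keep the default
```

In the paper the comparison covers a whole set of candidate variable-value strategies, and the selected one is then used to solve the remaining subproblems.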
1604.06614 | Marija Slavkovik | J\'er\^ome Lang and Marija Slavkovik and Srdjan Vesic | Agenda Separability in Judgment Aggregation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the better studied properties for operators in judgment aggregation is
independence, which essentially dictates that the collective judgment on one
issue should not depend on the individual judgments given on some other
issue(s) in the same agenda. Independence, although considered a desirable
property, is too strong, because together with mild additional conditions it
implies dictatorship. We propose here a weakening of independence, named agenda
separability: a judgment aggregation rule satisfies it if, whenever the agenda
is composed of several independent sub-agendas, the resulting collective
judgment sets can be computed separately for each sub-agenda and then put
together. We show that this property is discriminant, in the sense that among
judgment aggregation rules so far studied in the literature, some satisfy it
and some do not. We briefly discuss the implications of agenda separability on
the computation of judgment aggregation rules.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 11:53:37 GMT"
}
] | 1,461,542,400,000 | [
[
"Lang",
"Jérôme",
""
],
[
"Slavkovik",
"Marija",
""
],
[
"Vesic",
"Srdjan",
""
]
] |
1604.06641 | Pierre Schaus | Jordan Demeulenaere, Renaud Hartert, Christophe Lecoutre, Guillaume
Perez, Laurent Perron, Jean-Charles R\'egin, Pierre Schaus | Compact-Table: Efficiently Filtering Table Constraints with Reversible
Sparse Bit-Sets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we describe Compact-Table (CT), a bitwise algorithm to enforce
Generalized Arc Consistency (GAC) on table constraints. Although this
algorithm is the default propagator for table constraints in or-tools and
OscaR, two publicly available CP solvers, it has never been described so far.
Importantly, CT has been recently improved further with the introduction of
residues, resetting operations and a data-structure called reversible sparse
bit-set, used to maintain tables of supports (following the idea of tabular
reduction): tuples are invalidated incrementally on value removals by means of
bit-set operations. The experimentation that we have conducted with OscaR shows
that CT outperforms state-of-the-art algorithms STR2, STR3, GAC4R, MDD4R and
AC5-TC on standard benchmarks.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 13:12:38 GMT"
}
] | 1,461,542,400,000 | [
[
"Demeulenaere",
"Jordan",
""
],
[
"Hartert",
"Renaud",
""
],
[
"Lecoutre",
"Christophe",
""
],
[
"Perez",
"Guillaume",
""
],
[
"Perron",
"Laurent",
""
],
[
"Régin",
"Jean-Charles",
""
],
[
"Schaus",
"Pierre",
""
]
] |
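A rough sketch of the reversible sparse bit-set idea mentioned in the Compact-Table record above. This is not the or-tools/OscaR implementation; the class layout, word size and field names are assumptions made for illustration. Bit i is set iff tuple i is still a valid support; words that become zero are swapped out of the active prefix, and a trail restores both the words and the prefix length on backtracking.

```python
# Illustrative sketch of a reversible sparse bit-set (not the or-tools/OscaR code).
class ReversibleSparseBitSet:
    WORD = 64
    FULL = (1 << 64) - 1

    def __init__(self, n_tuples):
        n_words = (n_tuples + self.WORD - 1) // self.WORD
        self.words = [self.FULL] * n_words
        extra = n_words * self.WORD - n_tuples
        if extra:                                  # clear padding bits in the last word
            self.words[-1] >>= extra
        self.index = list(range(n_words))          # permutation of word positions
        self.limit = n_words                       # first 'limit' entries are the non-zero words
        self.trail = []                            # (depth, word, old_value, old_limit)

    def intersect_with(self, mask, depth):
        """Keep only tuples whose bit is also set in 'mask'; log changes for undo."""
        i = 0
        while i < self.limit:
            w = self.index[i]
            new = self.words[w] & mask[w]
            if new != self.words[w]:
                self.trail.append((depth, w, self.words[w], self.limit))
                self.words[w] = new
                if new == 0:                       # swap the emptied word out of the active prefix
                    self.limit -= 1
                    self.index[i], self.index[self.limit] = self.index[self.limit], self.index[i]
                    continue                       # re-examine the word swapped into position i
            i += 1

    def restore_to(self, depth):
        """Undo every change recorded at search depth >= 'depth' (backtracking)."""
        while self.trail and self.trail[-1][0] >= depth:
            _, w, old, old_limit = self.trail.pop()
            self.words[w] = old
            self.limit = old_limit

    def is_empty(self):
        return self.limit == 0
```

In the actual CT propagator the mask is built from the supports of values removed from variable domains and combined with residues and resetting operations, none of which are shown here.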
1604.06787 | Julien Savaux | Julien Savaux, Julien Vion, Sylvain Piechowiak, Ren\'e Mandiau,
Toshihiro Matsui, Katsutoshi Hirayama, Makoto Yokoo, Shakre Elmane, Marius
Silaghi | Utilitarian Distributed Constraint Optimization Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy has been a major motivation for distributed problem optimization.
However, even though several methods have been proposed to evaluate it, none of
them is widely used. The Distributed Constraint Optimization Problem (DCOP) is
a fundamental model used to approach various families of distributed problems.
As privacy loss does not occur when a solution is accepted, but when it is
proposed, privacy requirements cannot be interpreted as a criterion of the
objective function of the DCOP. Here we approach the problem by letting both
the optimized costs found in DCOPs and the privacy requirements guide the
agents' exploration of the search space. We introduce Utilitarian Distributed
Constraint Optimization Problem (UDCOP) where the costs and the privacy
requirements are used as parameters to a heuristic modifying the search
process. Common stochastic algorithms for decentralized constraint optimization
problems are evaluated here according to how well they preserve privacy.
Further, we propose some extensions where these solvers modify their search
process to take into account their privacy requirements, succeeding in
significantly reducing their privacy loss without significant degradation of
the solution quality.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 19:26:30 GMT"
}
] | 1,461,542,400,000 | [
[
"Savaux",
"Julien",
""
],
[
"Vion",
"Julien",
""
],
[
"Piechowiak",
"Sylvain",
""
],
[
"Mandiau",
"René",
""
],
[
"Matsui",
"Toshihiro",
""
],
[
"Hirayama",
"Katsutoshi",
""
],
[
"Yokoo",
"Makoto",
""
],
[
"Elmane",
"Shakre",
""
],
[
"Silaghi",
"Marius",
""
]
] |
1604.06790 | Julien Savaux | Julien Savaux, Julien Vion, Sylvain Piechowiak, Ren\'e Mandiau,
Toshihiro Matsui, Katsutoshi Hirayama, Makoto Yokoo, Shakre Elmane, Marius
Silaghi | DisCSPs with Privacy Recast as Planning Problems for Utility-based
Agents | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy has traditionally been a major motivation for decentralized problem
solving. However, even though several metrics have been proposed to quantify
it, none of them is easily integrated with common solvers. Constraint
programming is a fundamental paradigm used to approach various families of
problems. We introduce Utilitarian Distributed Constraint Satisfaction Problems
(UDisCSP) where the utility of each state is estimated as the difference
between the expected rewards for agreements on assignments for shared
variables, and the expected cost of privacy loss. Therefore, a traditional
DisCSP with privacy requirements is viewed as a planning problem. The actions
available to agents are: communication and local inference. Common
decentralized solvers are evaluated here from the point of view of their
interpretation as greedy planners. Further, we investigate some simple
extensions where these solvers start taking into account the utility function.
In these extensions we assume that the planning problem is further restricting
the set of communication actions to only the communication primitives present
in the corresponding solver protocols. The solvers obtained for the new type of
problems propose the action (communication/inference) to be performed in each
situation, defining thereby the policy.
| [
{
"version": "v1",
"created": "Fri, 22 Apr 2016 19:35:49 GMT"
}
] | 1,461,542,400,000 | [
[
"Savaux",
"Julien",
""
],
[
"Vion",
"Julien",
""
],
[
"Piechowiak",
"Sylvain",
""
],
[
"Mandiau",
"René",
""
],
[
"Matsui",
"Toshihiro",
""
],
[
"Hirayama",
"Katsutoshi",
""
],
[
"Yokoo",
"Makoto",
""
],
[
"Elmane",
"Shakre",
""
],
[
"Silaghi",
"Marius",
""
]
] |
1604.06954 | Santiago Ontanon | Santiago Onta\~n\'on | RHOG: A Refinement-Operator Library for Directed Labeled Graphs | Report of the theory behind the RHOG library developed under NSF
EAGER grant IIS-1551338 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This document provides the foundations behind the functionality provided by
the $\rho$G library (https://github.com/santiontanon/RHOG), focusing on the
basic operations the library provides: subsumption, refinement of directed
labeled graphs, and distance/similarity assessment between directed labeled
graphs. $\rho$G development was initially supported by the National Science
Foundation, by the EAGER grant IIS-1551338.
| [
{
"version": "v1",
"created": "Sat, 23 Apr 2016 21:03:45 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Apr 2020 23:39:45 GMT"
}
] | 1,587,427,200,000 | [
[
"Ontañón",
"Santiago",
""
]
] |
1604.06963 | David Jilk | David J. Jilk | Limits to Verification and Validation of Agentic Behavior | 13 pages, 0 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Verification and validation of agentic behavior have been suggested as
important research priorities in efforts to reduce risks associated with the
creation of general artificial intelligence (Russell et al 2015). In this paper
we question the appropriateness of using language of certainty with respect to
efforts to manage that risk. We begin by establishing a very general formalism
to characterize agentic behavior and to describe standards of acceptable
behavior. We show that determination of whether an agent meets any particular
standard is not computable. We discuss the extent of the burden associated with
verification by manual proof and by automated behavioral governance. We show
that to ensure decidability of the behavioral standard itself, one must further
limit the capabilities of the agent. We then demonstrate that if our concerns
relate to outcomes in the physical world, attempts at validation are futile.
Finally, we show that layered architectures aimed at making these challenges
tractable mistakenly equate intentions with actions or outcomes, thereby
failing to provide any guarantees. We conclude with a discussion of why
language of certainty should be eradicated from the conversation about the
safety of general artificial intelligence.
| [
{
"version": "v1",
"created": "Sat, 23 Apr 2016 23:01:29 GMT"
},
{
"version": "v2",
"created": "Mon, 10 Oct 2016 22:21:25 GMT"
}
] | 1,476,230,400,000 | [
[
"Jilk",
"David J.",
""
]
] |
1604.07095 | Xiaoxiao Guo | Xiaoxiao Guo, Satinder Singh, Richard Lewis and Honglak Lee | Deep Learning for Reward Design to Improve Monte Carlo Tree Search in
ATARI Games | In 25th International Joint Conference on Artificial Intelligence
(IJCAI), 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Monte Carlo Tree Search (MCTS) methods have proven powerful in planning for
sequential decision-making problems such as Go and video games, but their
performance can be poor when the planning depth and sampling trajectories are
limited or when the rewards are sparse. We present an adaptation of PGRD
(policy-gradient for reward-design) for learning a reward-bonus function to
improve UCT (an MCTS algorithm). Unlike previous applications of PGRD in which
the space of reward-bonus functions was limited to linear functions of
hand-coded state-action-features, we use PGRD with a multi-layer convolutional
neural network to automatically learn features from raw perception as well as
to adapt the non-linear reward-bonus function parameters. We also adopt a
variance-reducing gradient method to improve PGRD's performance. The new method
improves UCT's performance on multiple ATARI games compared to UCT without the
reward bonus. Combining PGRD and Deep Learning in this way should make adapting
rewards for MCTS algorithms far more widely and practically applicable than
before.
| [
{
"version": "v1",
"created": "Sun, 24 Apr 2016 23:51:18 GMT"
}
] | 1,461,628,800,000 | [
[
"Guo",
"Xiaoxiao",
""
],
[
"Singh",
"Satinder",
""
],
[
"Lewis",
"Richard",
""
],
[
"Lee",
"Honglak",
""
]
] |
1604.07097 | Kenneth Young | Kenny Young, Ryan Hayward, Gautham Vasan | Neurohex: A Deep Q-learning Hex Agent | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DeepMind's recent spectacular success in using deep convolutional neural nets
and machine learning to build superhuman level agents --- e.g. for Atari games
via deep Q-learning and for the game of Go via Reinforcement Learning ---
raises many questions, including to what extent these methods will succeed in
other domains. In this paper we consider DQL for the game of Hex: after
supervised initialization, we use selfplay to train NeuroHex, an 11-layer CNN
that plays Hex on the 13x13 board. Hex is the classic two-player alternate-turn
stone placement game played on a rhombus of hexagonal cells in which the winner
is whoever connects their two opposing sides. Despite the large action and
state space, our system trains a Q-network capable of strong play with no
search. After two weeks of Q-learning, NeuroHex achieves win-rates of 20.4% as
first player and 2.1% as second player against a 1-second/move version of
MoHex, the current ICGA Olympiad Hex champion. Our data suggests further
improvement might be possible with more training time.
| [
{
"version": "v1",
"created": "Sun, 24 Apr 2016 23:56:37 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2016 02:26:14 GMT"
}
] | 1,461,715,200,000 | [
[
"Young",
"Kenny",
""
],
[
"Hayward",
"Ryan",
""
],
[
"Vasan",
"Gautham",
""
]
] |
1604.07183 | Marc van Zee | Marc van Zee and Dragan Doder | AGM-Style Revision of Beliefs and Intentions from a Database Perspective
(Preliminary Version) | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a logic for temporal beliefs and intentions based on Shoham's
database perspective. We separate strong beliefs from weak beliefs. Strong
beliefs are independent from intentions, while weak beliefs are obtained by
adding intentions to strong beliefs and everything that follows from that. We
formalize coherence conditions on strong beliefs and intentions. We provide
AGM-style postulates for the revision of strong beliefs and intentions. We show
in a representation theorem that a revision operator satisfying our postulates
can be represented by a pre-order on interpretations of the beliefs, together
with a selection function for the intentions.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 09:44:02 GMT"
},
{
"version": "v2",
"created": "Tue, 26 Apr 2016 16:10:38 GMT"
}
] | 1,461,715,200,000 | [
[
"van Zee",
"Marc",
""
],
[
"Doder",
"Dragan",
""
]
] |
1604.07312 | Jan N. van Rijn | Jan N. van Rijn, Jonathan K. Vis | Endgame Analysis of Dou Shou Qi | 5 pages, ICGA Journal, Vol. 37, pp. 120-124, 2014 | ICGA Journal, Vol. 37, pp. 120-124, 2014 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dou Shou Qi is a game in which two players control a number of pieces, each
of them aiming to move one of their pieces onto a given square. We implemented
an engine for analyzing the game. Moreover, we created a series of endgame
tablebases containing all configurations with up to four pieces. These
tablebases are the first steps towards theoretically solving the game. Finally,
we constructed decision trees based on the endgame tablebases. In this note we
report on some interesting patterns.
| [
{
"version": "v1",
"created": "Mon, 25 Apr 2016 15:35:38 GMT"
}
] | 1,461,628,800,000 | [
[
"van Rijn",
"Jan N.",
""
],
[
"Vis",
"Jonathan K.",
""
]
] |
1604.07625 | Olegs Verhodubs | Olegs Verhodubs | Mutual Transformation of Information and Knowledge | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information and knowledge are transformable into each other. Information
transformation into knowledge has been shown, by the example of rule generation
from an OWL (Web Ontology Language) ontology, during the development of the SWES
(Semantic Web Expert System). The SWES is intended to be an expert system for
searching OWL ontologies from the Web, generating rules from the found
ontologies and supplementing the SWES knowledge base with these rules. The
purpose of this paper is to show knowledge transformation into information by
the example of ontology generation from rules.
| [
{
"version": "v1",
"created": "Tue, 26 Apr 2016 11:31:02 GMT"
}
] | 1,461,715,200,000 | [
[
"Verhodubs",
"Olegs",
""
]
] |
1604.08055 | Martin Suda | Giles Reger, Martin Suda, Andrei Voronkov, Krystof Hoder | Selecting the Selection | IJCAR 2016 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern saturation-based Automated Theorem Provers typically implement the
superposition calculus for reasoning about first-order logic with or without
equality. Practical implementations of this calculus use a variety of literal
selections and term orderings to tame the growth of the search space and help
steer proof search. This paper introduces the notion of lookahead selection
that estimates (looks ahead) the effect on the search space of selecting a
literal. There is also a case made for the use of incomplete selection
functions that attempt to restrict the search space instead of satisfying some
completeness criteria. Experimental evaluation in the Vampire theorem prover
shows that both lookahead selection and incomplete selection significantly
contribute to solving hard problems unsolvable by other methods.
| [
{
"version": "v1",
"created": "Wed, 27 Apr 2016 13:14:44 GMT"
}
] | 1,461,801,600,000 | [
[
"Reger",
"Giles",
""
],
[
"Suda",
"Martin",
""
],
[
"Voronkov",
"Andrei",
""
],
[
"Hoder",
"Krystof",
""
]
] |
1604.08148 | Changqing Liu | Changqing Liu | Defining Concepts of Emotion: From Philosophy to Science | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is motivated by a series of (related) questions as to whether a
computer can have pleasure and pain, what pleasure (and intensity of pleasure)
is, and, ultimately, what concepts of emotion are.
To determine what an emotion is, is a matter of conceptualization, namely,
understanding and explicitly encoding the concept of emotion as people use it
in everyday life. This is a notoriously difficult problem (Frijda, 1986, Fehr
\& Russell, 1984). This paper firstly shows why this is a difficult problem by
aligning it with the conceptualization of a few other so called semantic
primitives such as "EXIST", "FORCE", "BIG" (plus "LIMIT"). The definitions of
these thought-to-be-indefinable concepts, given in this paper, show what formal
definitions of concepts look like and how concepts are constructed. As a
by-product, owing to the explicit account of the meaning of "exist", the famous
dispute between Einstein and Bohr is naturally resolved from a linguistic point
of view. Secondly, defending Frijda's view that emotion is action tendency (or
Ryle's behavioral disposition (propensity)), we give a list of emotions defined
in terms of action tendency. In particular, the definitions of pleasure and the
feeling of beauty are presented.
Further, we give a formal definition of "action tendency", from which the
concept of "intensity" of emotions (including pleasure) is naturally derived in
a formal fashion. The meanings of "wish", "wait", "good", "hot" are analyzed.
| [
{
"version": "v1",
"created": "Thu, 11 Feb 2016 22:51:06 GMT"
}
] | 1,461,801,600,000 | [
[
"Liu",
"Changqing",
""
]
] |
1605.00495 | Ryuta Arisaka | Ryuta Arisaka and Ken Satoh | Coalition Formability Semantics with Conflict-Eliminable Sets of
Arguments | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider abstract-argumentation-theoretic coalition formability in this
work. Taking a model from political alliance among political parties, we will
contemplate profitability, and then formability, of a coalition. As is commonly
understood, a group forms a coalition with another group for a greater good,
the goodness measured against some criteria. As is also commonly understood,
however, a coalition may deliver benefits to a group X at the sacrifice of
something that X was able to do before coalition formation, which X may no
longer be able to do under the coalition. Use of the typical conflict-free sets of
arguments is not very fitting for accommodating this aspect of coalition, which
prompts us to turn to a weaker notion, conflict-eliminability, as a property
that a set of arguments should primarily satisfy. We require numerical
quantification of attack strengths as well as of argument strengths for its
characterisation. We will first analyse semantics of profitability of a given
conflict-eliminable set forming a coalition with another conflict-eliminable
set, and will then provide four coalition formability semantics, each of which
formalises certain utility postulate(s) taking the coalition profitability into
account.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 14:08:23 GMT"
},
{
"version": "v2",
"created": "Sun, 21 May 2017 23:46:45 GMT"
}
] | 1,495,497,600,000 | [
[
"Arisaka",
"Ryuta",
""
],
[
"Satoh",
"Ken",
""
]
] |
1605.00702 | F\'abio Cruz | F\'abio Cruz, Anand Subramanian, Bruno P. Bruck, Manuel Iori | A heuristic algorithm for a single vehicle static bike sharing
rebalancing problem | Technical report Universidade Federal da Para\'iba-UFPB, Brazil | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The static bike rebalancing problem (SBRP) concerns the task of repositioning
bikes among stations in self-service bike-sharing systems. This problem can be
seen as a variant of the one-commodity pickup and delivery vehicle routing
problem, where multiple visits are allowed to be performed at each station,
i.e., the demand of a station is allowed to be split. Moreover, a vehicle may
temporarily drop its load at a station, leaving it in excess or, alternatively,
collect more bikes from a station (even all of them), thus leaving it in
default. Both cases require further visits in order to meet the actual demands
of such station. This paper deals with a particular case of the SBRP, in which
only a single vehicle is available and the objective is to find a least-cost
route that meets the demand of all stations and does not violate the minimum
(zero) and maximum (vehicle capacity) load limits along the tour. Therefore,
the number of bikes to be collected or delivered at each station should be
appropriately determined in order to respect such constraints. We propose an
iterated local search (ILS) based heuristic to solve the problem. The ILS
algorithm was tested on 980 benchmark instances from the literature and the
results obtained are quite competitive when compared to other existing methods.
Moreover, our heuristic was capable of finding most of the known optimal
solutions and also of improving the results on a number of open instances.
| [
{
"version": "v1",
"created": "Mon, 2 May 2016 22:44:54 GMT"
},
{
"version": "v2",
"created": "Wed, 4 May 2016 00:26:14 GMT"
}
] | 1,462,406,400,000 | [
[
"Cruz",
"Fábio",
""
],
[
"Subramanian",
"Anand",
""
],
[
"Bruck",
"Bruno P.",
""
],
[
"Iori",
"Manuel",
""
]
] |
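As a generic illustration of the iterated local search scheme named in the record above, a bare-bones ILS loop is sketched below. The SBRP-specific neighbourhoods, perturbation and pickup/delivery decisions from the paper are not reproduced; local_search, perturb and cost are placeholder callables.

```python
# Bare-bones iterated local search skeleton (illustrative only).
def iterated_local_search(initial_solution, local_search, perturb, cost,
                          max_iterations=10_000):
    best = local_search(initial_solution)          # improve the starting solution
    current = best
    for _ in range(max_iterations):
        candidate = local_search(perturb(current)) # shake, then re-optimise locally
        if cost(candidate) < cost(current):        # simple "accept if better" rule
            current = candidate
        if cost(current) < cost(best):
            best = current
    return best
```

For the SBRP one would expect a solution to encode the visiting sequence together with the number of bikes collected or delivered at each visit, with cost returning the route length (or a penalised value when a load limit is violated); these details are assumptions here, not taken from the paper.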
1605.01180 | Alexey Potapov | Alexey Potapov | A Step from Probabilistic Programming to Cognitive Architectures | 4 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic programming is considered as a framework, in which basic
components of cognitive architectures can be represented in a unified and elegant
fashion. At the same time, necessity of adopting some component of cognitive
architectures for extending capabilities of probabilistic programming languages
is pointed out. In particular, implicit specification of generative models via
declaration of concepts and links between them is proposed, and usefulness of
declarative knowledge for achieving efficient inference is briefly discussed.
| [
{
"version": "v1",
"created": "Wed, 4 May 2016 08:34:17 GMT"
}
] | 1,462,406,400,000 | [
[
"Potapov",
"Alexey",
""
]
] |
1605.01534 | Mohit Yadav | Mohit Yadav, Pankaj Malhotra, Lovekesh Vig, K Sriram, and Gautam
Shroff | ODE - Augmented Training Improves Anomaly Detection in Sensor Data from
Machines | Published at NIPS Time-series Workshop - 2015 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machines of all kinds from vehicles to industrial equipment are increasingly
instrumented with hundreds of sensors. Using such data to detect anomalous
behaviour is critical for safety and efficient maintenance. However, anomalies
occur rarely and with great variety in such systems, so there is often
insufficient anomalous data to build reliable detectors. A standard approach to
mitigate this problem is to use one class methods relying only on data from
normal behaviour. Unfortunately, even these approaches are more likely to fail
in the scenario of a dynamical system with manual control input(s). Normal
behaviour in response to novel control input(s) might look very different to
the learned detector which may be incorrectly detected as anomalous. In this
paper, we address this issue by modelling time-series via Ordinary Differential
Equations (ODE) and utilising such an ODE model to simulate the behaviour of
dynamical systems under varying control inputs. The available data is then
augmented with data generated from the ODE, and the anomaly detector is
retrained on this augmented dataset. Experiments demonstrate that ODE-augmented
training data allows better coverage of possible control input(s) and results
in learning more accurate distinctions between normal and anomalous behaviour
in time-series.
| [
{
"version": "v1",
"created": "Thu, 5 May 2016 09:15:55 GMT"
}
] | 1,462,492,800,000 | [
[
"Yadav",
"Mohit",
""
],
[
"Malhotra",
"Pankaj",
""
],
[
"Vig",
"Lovekesh",
""
],
[
"Sriram",
"K",
""
],
[
"Shroff",
"Gautam",
""
]
] |
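A minimal sketch of the augmentation idea in the record above, assuming (purely for illustration) a first-order model dx/dt = -a*x + b*u fitted to the normal data; the model structure, fitting procedure and detector used in the paper are not reproduced.

```python
# Illustrative only: simulate a fitted ODE under random control inputs to
# generate extra "normal" series, then mix them with the real training data.
import random

def simulate(a, b, x0, controls, dt=0.1):
    """Forward-Euler simulation of dx/dt = -a*x + b*u for one control sequence."""
    x, series = x0, []
    for u in controls:
        x = x + dt * (-a * x + b * u)
        series.append(x)
    return series

def augment_training_set(real_series, a, b, n_synthetic=100, length=200):
    synthetic = []
    for _ in range(n_synthetic):
        controls = [random.uniform(0.0, 1.0) for _ in range(length)]
        synthetic.append(simulate(a, b, x0=0.0, controls=controls))
    return list(real_series) + synthetic
```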
1605.02160 | Paolo Liberatore | Paolo Liberatore | Belief Merging by Source Reliability Assessment | null | null | 10.1613/jair.1.11238 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Merging beliefs requires the plausibility of the sources of the information
to be merged. They are typically assumed equally reliable in the absence of hints
indicating otherwise; yet, a recent line of research spun from the idea of
deriving this information from the revision process itself. In particular, the
history of previous revisions and previous merging examples provide information
for performing subsequent mergings.
Yet, no examples or previous revisions may be available. In spite of the
apparent lack of information, something can still be inferred by a
try-and-check approach: a relative reliability ordering is assumed, the merging
process is performed based on it, and the result is compared with the original
information. The outcome of this check may be incoherent with the initial
assumption, like when some of the information provided by a completely reliable
source is rejected. In such cases, the reliability ordering assumed in the
first place can be excluded from consideration. The first theorem of this
article proves that such a scenario is indeed possible. Other results are
obtained under various definitions of reliability and merging.
| [
{
"version": "v1",
"created": "Sat, 7 May 2016 09:09:08 GMT"
}
] | 1,616,457,600,000 | [
[
"Liberatore",
"Paolo",
""
]
] |
1605.02321 | Yun-Ching Liu | Yun-Ching Liu and Yoshimasa Tsuruoka | Asymmetric Move Selection Strategies in Monte-Carlo Tree Search:
Minimizing the Simple Regret at Max Nodes | submitted to the 2016 IEEE Computational Intelligence and Games
Conference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The combination of multi-armed bandit (MAB) algorithms with Monte-Carlo tree
search (MCTS) has made a significant impact in various research fields. The UCT
algorithm, which combines the UCB bandit algorithm with MCTS, is a good example
of the success of this combination. The recent breakthrough made by AlphaGo,
which incorporates convolutional neural networks with bandit algorithms in
MCTS, also highlights the necessity of bandit algorithms in MCTS. However,
despite the various investigations carried out on MCTS, nearly all of them
still follow the paradigm of treating every node as an independent instance of
the MAB problem, and applying the same bandit algorithm and heuristics on every
node. As a result, this paradigm may leave some properties of the game tree
unexploited. In this work, we propose that max nodes and min nodes have
different concerns regarding their value estimation, and different bandit
algorithms should be applied accordingly. We develop the Asymmetric-MCTS
algorithm, which is an MCTS variant that applies a simple regret algorithm on
max nodes, and the UCB algorithm on min nodes. We will demonstrate the
performance of the Asymmetric-MCTS algorithm on the game of $9\times 9$ Go,
$9\times 9$ NoGo, and Othello.
| [
{
"version": "v1",
"created": "Sun, 8 May 2016 13:52:41 GMT"
}
] | 1,462,838,400,000 | [
[
"Liu",
"Yun-Ching",
""
],
[
"Tsuruoka",
"Yoshimasa",
""
]
] |
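A very rough illustration of the asymmetry described in the record above. The exact simple-regret rule used by the authors is not reproduced; the constants, node fields and the sqrt-based exploration term are assumptions.

```python
# Illustrative only: UCB1-style selection at min nodes and a more exploratory,
# simple-regret-oriented score at max nodes. Child values are assumed to be
# stored from the perspective of the player to move at 'node', so both node
# types maximise their own score here.
import math

def select_child(node, is_max_node, c=1.4):
    def ucb1(child):
        return child.mean_value + c * math.sqrt(math.log(node.visits) / child.visits)

    def simple_regret_score(child):
        # Wider exploration term (sqrt(N)/n instead of log(N)/n): identifying
        # the single best move matters more than accumulating reward.
        return child.mean_value + c * math.sqrt(math.sqrt(node.visits) / child.visits)

    score = simple_regret_score if is_max_node else ucb1
    return max(node.children, key=score)
```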
1605.02817 | Roman Yampolskiy | Federico Pistono, Roman V. Yampolskiy | Unethical Research: How to Create a Malevolent Artificial Intelligence | null | In proceedings of Ethics for Artificial Intelligence Workshop
(AI-Ethics-2016). Pages 1-7. New York, NY. July 9 -- 15, 2016 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cybersecurity research involves publishing papers about malicious exploits as
much as publishing information on how to design tools to protect
cyber-infrastructure. It is this information exchange between ethical hackers
and security experts, which results in a well-balanced cyber-ecosystem. In the
blooming domain of AI Safety Engineering, hundreds of papers have been
published on different proposals geared at the creation of a safe machine, yet
nothing, to our knowledge, has been published on how to design a malevolent
machine. Availability of such information would be of great value particularly
to computer scientists, mathematicians, and others who have an interest in AI
safety, and who are attempting to avoid the spontaneous emergence or the
deliberate creation of a dangerous AI, which can negatively affect human
activities and in the worst case cause the complete obliteration of the human
species. This paper provides some general guidelines for the creation of a
Malevolent Artificial Intelligence (MAI).
| [
{
"version": "v1",
"created": "Tue, 10 May 2016 01:39:38 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Sep 2016 18:29:13 GMT"
}
] | 1,496,707,200,000 | [
[
"Pistono",
"Federico",
""
],
[
"Yampolskiy",
"Roman V.",
""
]
] |
1605.02929 | Francesc Serratosa | Francesc Serratosa | Function-Described Graphs for Structural Pattern Recognition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present in this article the model Function-described graph (FDG), which is
a type of compact representation of a set of attributed graphs (AGs) that
borrows from Random Graphs the capability of probabilistic modelling of
structural and attribute information. We define the FDGs, their features and
two distance measures between AGs (unclassified patterns) and FDGs (models or
classes) and we also explain an efficient matching algorithm. Two applications
of FDGs are presented: in the former, FDGs are used for modelling and matching
3D-objects described by multiple views, whereas in the latter, they are used
for representing and recognising human faces, described also by several views.
| [
{
"version": "v1",
"created": "Tue, 10 May 2016 10:30:06 GMT"
}
] | 1,462,924,800,000 | [
[
"Serratosa",
"Francesc",
""
]
] |