id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1801.08287 | Craig Sherstan | Craig Sherstan, Brendan Bennett, Kenny Young, Dylan R. Ashley, Adam
White, Martha White, Richard S. Sutton | Directly Estimating the Variance of the {\lambda}-Return Using
Temporal-Difference Methods | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates estimating the variance of a temporal-difference
learning agent's update target. Most reinforcement learning methods use an
estimate of the value function, which captures how good it is for the agent to
be in a particular state and is mathematically expressed as the expected sum of
discounted future rewards (called the return). These values can be
straightforwardly estimated by averaging batches of returns using Monte Carlo
methods. However, if we wish to update the agent's value estimates during
learning--before terminal outcomes are observed--we must use a different
estimation target called the {\lambda}-return, which truncates the return with
the agent's own estimate of the value function. Temporal difference learning
methods estimate the expected {\lambda}-return for each state, allowing these
methods to update online and incrementally, and in most cases achieve better
generalization error and faster learning than Monte Carlo methods. Naturally
one could attempt to estimate higher-order moments of the {\lambda}-return.
This paper is about estimating the variance of the {\lambda}-return. Prior work
has shown that given estimates of the variance of the {\lambda}-return,
learning systems can be constructed to (1) mitigate risk in action selection,
and (2) automatically adapt the parameters of the learning process itself to
improve performance. Unfortunately, existing methods for estimating the
variance of the {\lambda}-return are complex and not well understood
empirically. We contribute a method for estimating the variance of the
{\lambda}-return directly using policy evaluation methods from reinforcement
learning. Our approach is significantly simpler than prior methods that
independently estimate the second moment of the {\lambda}-return. Empirically
our new approach behaves at least as well as existing approaches, but is
generally more robust.
| [
{
"version": "v1",
"created": "Thu, 25 Jan 2018 06:48:14 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Feb 2018 17:00:05 GMT"
}
] | 1,518,652,800,000 | [
[
"Sherstan",
"Craig",
""
],
[
"Bennett",
"Brendan",
""
],
[
"Young",
"Kenny",
""
],
[
"Ashley",
"Dylan R.",
""
],
[
"White",
"Adam",
""
],
[
"White",
"Martha",
""
],
[
"Sutton",
"Richard S.",
""
]
] |
1801.08295 | Kui Yu | Kui Yu, Lin Liu, Jiuyong Li | Discovering Markov Blanket from Multiple interventional Datasets | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study the problem of discovering the Markov blanket (MB) of
a target variable from multiple interventional datasets. Datasets attained from
interventional experiments contain richer causal information than passively
observed data (observational data) for MB discovery. However, almost all
existing MB discovery methods are designed for finding MBs from a single
observational dataset. To identify MBs from multiple interventional datasets,
we face two challenges: (1) unknown intervention variables; (2) nonidentical
data distributions. To tackle the challenges, we theoretically analyze (a)
under what conditions we can find the correct MB of a target variable, and (b)
under what conditions we can identify the causes of the target variable via
discovering its MB. Based on the theoretical analysis, we propose a new
algorithm for discovering MBs from multiple interventional datasets, and
present the conditions/assumptions which assure the correctness of the
algorithm. To our knowledge, this work is the first to present the theoretical
analyses about the conditions for MB discovery in multiple interventional
datasets and the algorithm to find the MBs in relation to the conditions. Using
benchmark Bayesian networks and real-world datasets, our experiments validate
the effectiveness and efficiency of the proposed algorithm.
| [
{
"version": "v1",
"created": "Thu, 25 Jan 2018 07:34:41 GMT"
}
] | 1,516,924,800,000 | [
[
"Yu",
"Kui",
""
],
[
"Liu",
"Lin",
""
],
[
"Li",
"Jiuyong",
""
]
] |
1801.08365 | Vaishak Belle | Vaishak Belle | Probabilistic Planning by Probabilistic Programming | Article at AAAI-18 Workshop on Planning and Inference | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automated planning is a major topic of research in artificial intelligence,
and enjoys a long and distinguished history. The classical paradigm assumes a
distinguished initial state, comprised of a set of facts, and is defined over a
set of actions which change that state in one way or another. Planning in many
real-world settings, however, is much more involved: an agent's knowledge is
almost never simply a set of facts that are true, and actions that the agent
intends to execute never operate the way they are supposed to. Thus,
probabilistic planning attempts to incorporate stochastic models directly into
the planning process. In this article, we briefly report on probabilistic
planning through the lens of probabilistic programming: a programming paradigm
that aims to ease the specification of structured probability distributions. In
particular, we provide an overview of the features of two systems, HYPE and
ALLEGRO, which emphasise different strengths of probabilistic programming that
are particularly useful for complex modelling issues raised in probabilistic
planning. Among other things, with these systems, one can instantiate planning
problems with growing and shrinking state spaces, discrete and continuous
probability distributions, and non-unique prior distributions in a first-order
setting.
| [
{
"version": "v1",
"created": "Thu, 25 Jan 2018 11:47:42 GMT"
}
] | 1,516,924,800,000 | [
[
"Belle",
"Vaishak",
""
]
] |
1801.08459 | Jihyung Moon | Jihyung Moon, Hyochang Yang, Sungzoon Cho | Finding ReMO (Related Memory Object): A Simple Neural Architecture for
Text based Reasoning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To solve the text-based question answering task that requires relational
reasoning, it is necessary to memorize a large amount of information and to find
the question-relevant information in the memory. Most approaches have been
based on external memory and the four components proposed by Memory Network. The
distinctive component among them is the way of finding the necessary
information, which contributes to the performance. Recently, a simple but
powerful neural network module for reasoning called Relation Network (RN) has
been introduced. We analyzed RN from the viewpoint of Memory Network, and realized
that its MLP component is able to reveal the complicated relation between a
question and an object pair. Motivated by this, we introduce ReMO, which uses an MLP to
find relevant information on the Memory Network architecture. It shows new
state-of-the-art results in jointly trained bAbI-10k story-based question
answering tasks and bAbI dialog-based question answering tasks.
| [
{
"version": "v1",
"created": "Thu, 25 Jan 2018 15:50:44 GMT"
},
{
"version": "v2",
"created": "Fri, 26 Jan 2018 03:47:53 GMT"
}
] | 1,517,184,000,000 | [
[
"Moon",
"Jihyung",
""
],
[
"Yang",
"Hyochang",
""
],
[
"Cho",
"Sungzoon",
""
]
] |
1801.08641 | Kien Do | Kien Do, Truyen Tran, Svetha Venkatesh | Knowledge Graph Embedding with Multiple Relation Projections | 6 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graphs contain rich relational structures of the world, and thus
complement data-driven machine learning in heterogeneous data. One of the most
effective methods in representing knowledge graphs is to embed symbolic
relations and entities into continuous spaces, where relations are
approximately linear translation between projected images of entities in the
relation space. However, state-of-the-art relation projection methods such as
TransR, TransD or TransSparse do not model the correlation between relations,
and thus are not scalable to complex knowledge graphs with thousands of
relations, both in computational demand and in statistical robustness. To this
end we introduce TransF, a novel translation-based method which mitigates the
burden of relation projection by explicitly modeling the basis subspaces of
projection matrices. As a result, TransF is far more lightweight than the
existing projection methods, and is robust when facing a high number of
relations. Experimental results on the canonical link prediction task show that
our proposed model outperforms competing rivals by a large margin and achieves
state-of-the-art performance. In particular, TransF improves by 9%/5% in the
head/tail entity prediction task for N-to-1/1-to-N relations over the best
performing translation-based method.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2018 00:28:51 GMT"
}
] | 1,517,184,000,000 | [
[
"Do",
"Kien",
""
],
[
"Tran",
"Truyen",
""
],
[
"Venkatesh",
"Svetha",
""
]
] |
1801.08650 | Chang-Shing Lee | Chang-Shing Lee, Mei-Hui Wang, Tzong-Xiang Huang, Li-Chung Chen,
Yung-Ching Huang, Sheng-Chi Yang, Chien-Hsun Tseng, Pi-Hsia Hung, and Naoyuki
Kubota | Ontology-based Fuzzy Markup Language Agent for Student and Robot
Co-Learning | This paper is submitted to IEEE WCCI 2018 Conference for review | null | 10.1109/FUZZ-IEEE.2018.8491610 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An intelligent robot agent based on domain ontology, machine learning
mechanism, and Fuzzy Markup Language (FML) for students and robot co-learning
is presented in this paper. The machine-human co-learning model is established
to help various students learn the mathematical concepts based on their
learning ability and performance. Meanwhile, the robot acts as a teacher's
assistant to co-learn with children in the class. The FML-based knowledge base
and rule base are embedded in the robot so that the teachers can get feedback
from the robot on whether students make progress or not. Next, we inferred
students' learning performance based on learning content's difficulty and
students' ability, concentration level, as well as teamwork spirit in the class.
Experimental results show that learning with the robot is helpful for
disadvantaged and below-basic children. Moreover, the accuracy of the
intelligent FML-based agent for student learning increases after applying the
machine learning mechanism.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2018 02:04:01 GMT"
}
] | 1,555,286,400,000 | [
[
"Lee",
"Chang-Shing",
""
],
[
"Wang",
"Mei-Hui",
""
],
[
"Huang",
"Tzong-Xiang",
""
],
[
"Chen",
"Li-Chung",
""
],
[
"Huang",
"Yung-Ching",
""
],
[
"Yang",
"Sheng-Chi",
""
],
[
"Tseng",
"Chien-Hsun",
""
],
[
"Hung",
"Pi-Hsia",
""
],
[
"Kubota",
"Naoyuki",
""
]
] |
1801.08757 | Gal Dalal | Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin
Paduraru, Yuval Tassa | Safe Exploration in Continuous Action Spaces | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of deploying a reinforcement learning (RL) agent on a
physical system such as a datacenter cooling unit or robot, where critical
constraints must never be violated. We show how to exploit the typically smooth
dynamics of these systems and enable RL algorithms to never violate constraints
during learning. Our technique is to directly add to the policy a safety layer
that analytically solves an action correction formulation per each state. The
novelty of obtaining an elegant closed-form solution is attained due to a
linearized model, learned on past trajectories consisting of arbitrary actions.
This is to mimic the real-world circumstances where data logs were generated
with a behavior policy that is implausible to describe mathematically; such
cases render the known safety-aware off-policy methods inapplicable. We
demonstrate the efficacy of our approach on new representative physics-based
environments, and prevail where reward shaping fails by maintaining zero
constraint violations.
| [
{
"version": "v1",
"created": "Fri, 26 Jan 2018 11:11:18 GMT"
}
] | 1,517,184,000,000 | [
[
"Dalal",
"Gal",
""
],
[
"Dvijotham",
"Krishnamurthy",
""
],
[
"Vecerik",
"Matej",
""
],
[
"Hester",
"Todd",
""
],
[
"Paduraru",
"Cosmin",
""
],
[
"Tassa",
"Yuval",
""
]
] |
1801.09061 | Nick Bassiliades | Nick Bassiliades | SWRL2SPIN: A tool for transforming SWRL rule bases in OWL ontologies to
object-oriented SPIN rules | null | International Journal on Semantic Web and Information Systems,
Vol. 16, Iss. 1, Art. 5, 2020 | 10.4018/IJSWIS.2020010105 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semantic Web Rule Language (SWRL) combines OWL (Web Ontology Language)
ontologies with Horn Logic rules of the Rule Markup Language (RuleML) family.
Being supported by ontology editors, rule engines and ontology reasoners, it
has become a very popular choice for developing rule-based applications on top
of ontologies. However, SWRL is probably not going to become a WWW Consortium
standard, prohibiting industrial acceptance. On the other hand, SPIN (SPARQL
Inferencing Notation) has become a de-facto industry standard to represent
SPARQL rules and constraints on Semantic Web models, building on the widespread
acceptance of SPARQL (SPARQL Protocol and RDF Query Language). In this paper,
we argue that the life of existing SWRL rule-based ontology applications can
be prolonged by converting them to SPIN. To this end, we have developed the
SWRL2SPIN tool in Prolog that transforms SWRL rules into SPIN rules,
considering the object-orientation of SPIN, i.e. linking rules to the
appropriate ontology classes and optimizing them, as derived by analysing the
rule conditions.
| [
{
"version": "v1",
"created": "Sat, 27 Jan 2018 09:36:22 GMT"
},
{
"version": "v2",
"created": "Sat, 3 Feb 2018 09:33:33 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Dec 2018 07:31:21 GMT"
},
{
"version": "v4",
"created": "Thu, 8 Dec 2022 08:03:53 GMT"
}
] | 1,670,544,000,000 | [
[
"Bassiliades",
"Nick",
""
]
] |
1801.09317 | Jason Pittman | Jason M. Pittman and Courtney Crosby | A Cyber Science Based Ontology for Artificial General Intelligence
Containment | 12 pages, 4 figures, 3 tables Updated author name | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of artificial general intelligence is considered by many to
be inevitable. What such intelligence does after becoming aware is not so
certain. To that end, research suggests that the likelihood of artificial
general intelligence becoming hostile to humans is significant enough to
warrant inquiry into methods to limit such potential. Thus, containment of
artificial general intelligence is a timely and meaningful research topic.
While there is limited research exploring possible containment strategies, such
work is bounded by the underlying field the strategies draw upon. Accordingly,
we set out to construct an ontology to describe necessary elements in any
future containment technology. Using existing academic literature, we developed
a single domain ontology containing five levels, 32 codes, and 32 associated
descriptors. Further, we constructed ontology diagrams to demonstrate intended
relationships. We then identified humans, AGI, and the cyber world as novel
agent objects necessary for future containment activities. Collectively, the
work addresses three critical gaps: (a) identifying and arranging fundamental
constructs; (b) situating AGI containment within cyber science; and (c)
developing scientific rigor within the field.
| [
{
"version": "v1",
"created": "Sun, 28 Jan 2018 23:45:17 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Aug 2021 09:50:55 GMT"
}
] | 1,627,948,800,000 | [
[
"Pittman",
"Jason M.",
""
],
[
"Crosby",
"Courtney",
""
]
] |
1801.09854 | Tathagata Chakraborti | Tathagata Chakraborti and Subbarao Kambhampati | Algorithms for the Greater Good! On Mental Modeling and Acceptable
Symbiosis in Human-AI Collaboration | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective collaboration between humans and AI-based systems requires
effective modeling of the human in the loop, both in terms of the mental state
and the physical capabilities of that human. However, these models can
also open up pathways for manipulating and exploiting the human in the hopes of
achieving some greater good, especially when the intent or values of the AI and
the human are not aligned or when they have an asymmetrical relationship with
respect to knowledge or computation power. In fact, such behavior does not
necessarily require any malicious intent but can rather be borne out of
cooperative scenarios. It is also beyond simple misinterpretation of intents,
as in the case of value alignment problems, and thus can be effectively
engineered if desired. Such techniques already exist and pose several
unresolved ethical and moral questions with regards to the design of autonomy.
In this paper, we illustrate some of these issues in a teaming scenario and
investigate how they are perceived by participants in a thought experiment.
| [
{
"version": "v1",
"created": "Tue, 30 Jan 2018 05:23:28 GMT"
}
] | 1,517,356,800,000 | [
[
"Chakraborti",
"Tathagata",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1801.10055 | Blai Bonet | Blai Bonet and Hector Geffner | Features, Projections, and Representation Change for Generalized
Planning | Accepted in IJCAI-18 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generalized planning is concerned with the characterization and computation
of plans that solve many instances at once. In the standard formulation, a
generalized plan is a mapping from feature or observation histories into
actions, assuming that the instances share a common pool of features and
actions. This assumption, however, excludes the standard relational planning
domains where actions and objects change across instances. In this work, we
extend the standard formulation of generalized planning to such domains. This
is achieved by projecting the actions over the features, resulting in a common
set of abstract actions which can be tested for soundness and completeness, and
which can be used for generating general policies such as "if the gripper is
empty, pick the clear block above x and place it on the table" that achieve the
goal clear(x) in any Blocksworld instance. In this policy, "pick the clear
block above x" is an abstract action that may represent the action Unstack(a,
b) in one situation and the action Unstack(b, c) in another. Transformations
are also introduced for computing such policies by means of fully observable
non-deterministic (FOND) planners. The value of generalized representations for
learning general policies is also discussed.
| [
{
"version": "v1",
"created": "Tue, 30 Jan 2018 15:32:02 GMT"
},
{
"version": "v2",
"created": "Tue, 15 May 2018 12:12:04 GMT"
},
{
"version": "v3",
"created": "Thu, 31 May 2018 15:43:24 GMT"
},
{
"version": "v4",
"created": "Thu, 14 Jun 2018 13:11:19 GMT"
}
] | 1,529,020,800,000 | [
[
"Bonet",
"Blai",
""
],
[
"Geffner",
"Hector",
""
]
] |
1801.10287 | Ajin Joseph | Ajin George Joseph and Shalabh Bhatnagar | An Incremental Off-policy Search in a Model-free Markov Decision Process
Using a Single Sample Path | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider a modified version of the control problem in a
model free Markov decision process (MDP) setting with large state and action
spaces. The control problem most commonly addressed in the contemporary
literature is to find an optimal policy which maximizes the value function,
i.e., the long run discounted reward of the MDP. The current settings also
assume access to a generative model of the MDP with the hidden premise that
observations of the system behaviour in the form of sample trajectories can be
obtained with ease from the model. In this paper, we consider a modified
version, where the cost function is the expectation of a non-convex function of
the value function without access to the generative model. Rather, we assume
that a sample trajectory generated using a priori chosen behaviour policy is
made available. In this restricted setting, we solve the modified control
problem in its true sense, i.e., to find the best possible policy given this
limited information. We propose a stochastic approximation algorithm based on
the well-known cross entropy method which is data (sample trajectory)
efficient, stable, robust as well as computationally and storage efficient. We
provide a proof of convergence of our algorithm to a policy which is globally
optimal relative to the behaviour policy. We also present experimental results
to corroborate our claims and we demonstrate the superiority of the solution
produced by our algorithm compared to the state-of-the-art algorithms under
appropriately chosen behaviour policy.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 02:53:34 GMT"
}
] | 1,517,443,200,000 | [
[
"Joseph",
"Ajin George",
""
],
[
"Bhatnagar",
"Shalabh",
""
]
] |
1801.10437 | L\^e Nguy\^en Hoang | L\^e Nguy\^en Hoang and Rachid Guerraoui | Deep Learning Works in Practice. But Does it Work in Theory? | 6 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning relies on a very specific kind of neural networks: those
superposing several neural layers. In the last few years, deep learning
achieved major breakthroughs in many tasks such as image analysis, speech
recognition, natural language processing, and so on. Yet, there is no
theoretical explanation of this success. In particular, it is not clear why the
deeper the network, the better it actually performs.
We argue that the explanation is intimately connected to a key feature of the
data collected from our surrounding universe to feed the machine learning
algorithms: large non-parallelizable logical depth. Roughly speaking, we
conjecture that the shortest computational descriptions of the universe are
algorithms with inherently large computation times, even when a large number of
computers are available for parallelization. Interestingly, this conjecture,
combined with the folklore conjecture in theoretical computer science that $ P
\neq NC$, explains the success of deep learning.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 13:12:30 GMT"
}
] | 1,517,443,200,000 | [
[
"Hoang",
"Lê Nguyên",
""
],
[
"Guerraoui",
"Rachid",
""
]
] |
1801.10495 | Stefan L\"udtke | Stefan L\"udtke, Max Schr\"oder, Sebastian Bader, Kristian Kersting,
Thomas Kirste | Lifted Filtering via Exchangeable Decomposition | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a model for exact recursive Bayesian filtering based on lifted
multiset states. Combining multisets with lifting makes it possible to
simultaneously exploit multiple strategies for reducing inference complexity
when compared to list-based grounded state representations. The core idea is to
borrow the concept of Maximally Parallel Multiset Rewriting Systems and to
enhance it by concepts from Rao-Blackwellization and Lifted Inference, giving a
representation of state distributions that enables efficient inference. In
worlds where the random variables that define the system state are exchangeable
-- where the identity of entities does not matter -- it automatically uses a
representation that abstracts from ordering (achieving an exponential reduction
in complexity) -- and it automatically adapts when observations or system
dynamics destroy exchangeability by breaking symmetry.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 15:37:13 GMT"
},
{
"version": "v2",
"created": "Mon, 7 May 2018 06:19:58 GMT"
}
] | 1,525,737,600,000 | [
[
"Lüdtke",
"Stefan",
""
],
[
"Schröder",
"Max",
""
],
[
"Bader",
"Sebastian",
""
],
[
"Kersting",
"Kristian",
""
],
[
"Kirste",
"Thomas",
""
]
] |
1801.10545 | Oscar Duarte | Oscar Duarte and Sandra T\'ellez | A family of OWA operators based on Faulhaber's formulas | 17 pages, 7 figures, 1 table, 1 algorithm | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we develop a new family of Ordered Weighted Averaging (OWA)
operators. The weight vector is obtained from a desired orness of the operator.
Using Faulhaber's formulas we obtain direct and simple expressions for the
weight vector without any iteration loop. With the exception of one weight, the
remaining weights follow a straight-line relation. As a result, a fast and robust
algorithm is developed. The resulting weight vector is suboptimal according
to the Maximum Entropy criterion, but it is very close to the optimum.
Comparisons are done with other procedures.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 16:51:34 GMT"
}
] | 1,517,443,200,000 | [
[
"Duarte",
"Oscar",
""
],
[
"Téllez",
"Sandra",
""
]
] |
1802.00048 | Damien Anderson Mr | Damien Anderson, Matthew Stephenson, Julian Togelius, Christian Salge,
John Levine and Jochen Renz | Deceptive Games | 16 pages, accepted at EvoStar2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deceptive games are games where the reward structure or other aspects of the
game are designed to lead the agent away from a globally optimal policy. While
many games are already deceptive to some extent, we designed a series of games
in the Video Game Description Language (VGDL) implementing specific types of
deception, classified by the cognitive biases they exploit. VGDL games can be
run in the General Video Game Artificial Intelligence (GVGAI) Framework, making
it possible to test a variety of existing AI agents that have been submitted to
the GVGAI Competition on these deceptive games. Our results show that all
tested agents are vulnerable to several kinds of deception, but that different
agents have different weaknesses. This suggests that we can use deception to
understand the capabilities of a game-playing algorithm, and game-playing
algorithms to characterize the deception displayed by a game.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 20:06:05 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Feb 2018 23:12:14 GMT"
}
] | 1,517,875,200,000 | [
[
"Anderson",
"Damien",
""
],
[
"Stephenson",
"Matthew",
""
],
[
"Togelius",
"Julian",
""
],
[
"Salge",
"Christian",
""
],
[
"Levine",
"John",
""
],
[
"Renz",
"Jochen",
""
]
] |
1802.00050 | Lior Friedman Mr | Lior Friedman and Shaul Markovitch | Recursive Feature Generation for Knowledge-based Learning | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | When humans perform inductive learning, they often enhance the process with
background knowledge. With the increasing availability of well-formed
collaborative knowledge bases, the performance of learning algorithms could be
significantly enhanced if a way were found to exploit these knowledge bases. In
this work, we present a novel algorithm for injecting external knowledge into
induction algorithms using feature generation. Given a feature, the algorithm
defines a new learning task over its set of values, and uses the knowledge base
to solve the constructed learning task. The resulting classifier is then used
as a new feature for the original problem. We have applied our algorithm to the
domain of text classification using large semantic knowledge bases. We have
shown that the generated features significantly improve the performance of
existing learning algorithms.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 20:18:36 GMT"
}
] | 1,517,529,600,000 | [
[
"Friedman",
"Lior",
""
],
[
"Markovitch",
"Shaul",
""
]
] |
1802.00295 | Gilles Falquet | Sahar Aljalbout and Gilles Falquet | A Semantic Model for Historical Manuscripts | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study and publication of historical scientific manuscripts are complex
tasks that involve, among others, the explicit representation of the text
meanings and reasoning on temporal entities. In this paper we present the first
results of an interdisciplinary project dedicated to the study of Saussure's
manuscripts. These results aim to fulfill requirements elaborated with
Saussurean humanists. They comprise a model for the representation of
time-varying statements and time-varying domain knowledge (in particular
terminologies) as well as implementation techniques for the semantic indexing
of manuscripts and for temporal reasoning on knowledge extracted from the
manuscripts.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 12:47:25 GMT"
},
{
"version": "v2",
"created": "Fri, 2 Feb 2018 07:30:17 GMT"
}
] | 1,517,788,800,000 | [
[
"Aljalbout",
"Sahar",
""
],
[
"Falquet",
"Gilles",
""
]
] |
1802.00386 | Leye Wang | Leye Wang, Xu Geng, Xiaojuan Ma, Feng Liu, Qiang Yang | Cross-City Transfer Learning for Deep Spatio-Temporal Prediction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatio-temporal prediction is a key type of tasks in urban computing, e.g.,
traffic flow and air quality. Adequate data is usually a prerequisite,
especially when deep learning is adopted. However, the development levels of
different cities are unbalanced, and still many cities suffer from data
scarcity. To address the problem, we propose a novel cross-city transfer
learning method for deep spatio-temporal prediction tasks, called RegionTrans.
RegionTrans aims to effectively transfer knowledge from a data-rich source city
to a data-scarce target city. More specifically, we first learn an inter-city
region matching function to match each target city region to a similar source
city region. A neural network is designed to effectively extract region-level
representation for spatio-temporal prediction. Finally, an optimization
algorithm is proposed to transfer learned features from the source city to the
target city with the region matching function. Using citywide crowd flow
prediction as a demonstration experiment, we verify the effectiveness of
RegionTrans. Results show that RegionTrans can outperform the state-of-the-art
fine-tuning deep spatio-temporal prediction models by reducing up to 10.7%
prediction error.
| [
{
"version": "v1",
"created": "Thu, 1 Feb 2018 16:52:42 GMT"
},
{
"version": "v2",
"created": "Sat, 19 May 2018 04:38:57 GMT"
}
] | 1,526,947,200,000 | [
[
"Wang",
"Leye",
""
],
[
"Geng",
"Xu",
""
],
[
"Ma",
"Xiaojuan",
""
],
[
"Liu",
"Feng",
""
],
[
"Yang",
"Qiang",
""
]
] |
1802.00682 | Finale Doshi-Velez | Menaka Narayanan, Emily Chen, Jeffrey He, Been Kim, Sam Gershman,
Finale Doshi-Velez | How do Humans Understand Explanations from Machine Learning Systems? An
Evaluation of the Human-Interpretability of Explanation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have seen a boom in interest in machine learning systems that
can provide a human-understandable rationale for their predictions or
decisions. However, exactly what kinds of explanation are truly
human-interpretable remains poorly understood. This work advances our
understanding of what makes explanations interpretable in the specific context
of verification. Suppose we have a machine learning system that predicts X, and
we provide rationale for this prediction X. Given an input, an explanation, and
an output, is the output consistent with the input and the supposed rationale?
Via a series of user-studies, we identify what kinds of increases in complexity
have the greatest effect on the time it takes for humans to verify the
rationale, and which seem relatively insensitive.
| [
{
"version": "v1",
"created": "Fri, 2 Feb 2018 13:53:13 GMT"
}
] | 1,517,788,800,000 | [
[
"Narayanan",
"Menaka",
""
],
[
"Chen",
"Emily",
""
],
[
"He",
"Jeffrey",
""
],
[
"Kim",
"Been",
""
],
[
"Gershman",
"Sam",
""
],
[
"Doshi-Velez",
"Finale",
""
]
] |
1802.00690 | Peter Bruza | Peter D. Bruza | Modelling contextuality by probabilistic programs with hypergraph
semantics | Accepted for "Theoretical Computer Science" | null | 10.1016/j.tcs.2017.11.028 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of a phenomenon are often developed by examining it under different
experimental conditions, or measurement contexts. The resultant probabilistic
models assume that the underlying random variables, which define a measurable
set of outcomes, can be defined independent of the measurement context. The
phenomenon is deemed contextual when this assumption fails. Contextuality is an
important issue in quantum physics. However, there has been growing speculation
that it manifests outside the quantum realm with human cognition being a
particularly prominent area of investigation. This article contributes the
foundations of a probabilistic programming language that allows convenient
exploration of contextuality in wide range of applications relevant to
cognitive science and artificial intelligence. Specific syntax is proposed to
allow the specification of "measurement contexts". Each such context delivers a
partial model of the phenomenon based on the associated experimental condition
described by the measurement context. The probabilistic program is translated
into a hypergraph in a modular way. Recent theoretical results from the field
of quantum physics show that contextuality can be equated with the possibility
of constructing a probabilistic model on the resulting hypergraph. The use of
hypergraphs opens the door for a theoretically succinct and efficient
computational semantics sensitive to modelling both contextual and
non-contextual phenomena. Finally, this article raises awareness of
contextuality beyond quantum physics and contributes formal methods to detect
its presence by means of hypergraph semantics.
| [
{
"version": "v1",
"created": "Wed, 31 Jan 2018 22:19:41 GMT"
}
] | 1,517,788,800,000 | [
[
"Bruza",
"Peter D.",
""
]
] |
1802.01013 | Sarath Sreedharan | Tathagata Chakraborti, Sarath Sreedharan, Sachin Grover, Subbarao
Kambhampati | Plan Explanations as Model Reconciliation -- An Empirical Study | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work in explanation generation for decision making agents has looked
at how unexplained behavior of autonomous systems can be understood in terms of
differences in the model of the system and the human's understanding of the
same, and how the explanation process as a result of this mismatch can be then
seen as a process of reconciliation of these models. Existing algorithms in
such settings, while having been built on contrastive, selective and social
properties of explanations as studied extensively in the psychology literature,
have not, to the best of our knowledge, been evaluated in settings with actual
humans in the loop. As such, the applicability of such explanations to human-AI
and human-robot interactions remains suspect. In this paper, we set out to
evaluate these explanation generation algorithms in a series of studies in a
mock search and rescue scenario with an internal semi-autonomous robot and an
external human commander. We demonstrate to what extent the properties of these
algorithms hold as they are evaluated by humans, and how the dynamics of trust
between the human and the robot evolve during the process of these
interactions.
| [
{
"version": "v1",
"created": "Sat, 3 Feb 2018 19:17:58 GMT"
}
] | 1,517,875,200,000 | [
[
"Chakraborti",
"Tathagata",
""
],
[
"Sreedharan",
"Sarath",
""
],
[
"Grover",
"Sachin",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1802.01173 | Zhi-Hua Zhou | Wang-Zhou Dai, Qiu-Ling Xu, Yang Yu, Zhi-Hua Zhou | Tunneling Neural Perception and Logic Reasoning through Abductive
Learning | Corrected typos | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Perception and reasoning are basic human abilities that are seamlessly
connected as part of human intelligence. However, in current machine learning
systems, the perception and reasoning modules are incompatible. Tasks requiring
joint perception and reasoning ability are difficult to accomplish autonomously
and still demand human intervention. Inspired by the way language experts
decoded Mayan scripts by joining two abilities in an abductive manner, this
paper proposes the abductive learning framework. The framework learns
perception and reasoning simultaneously with the help of a trial-and-error
abductive process. We present the Neural-Logical Machine as an implementation
of this novel learning framework. We demonstrate that--using human-like
abductive learning--the machine learns from a small set of simple hand-written
equations and then generalizes well to complex equations, a feat that is beyond
the capability of state-of-the-art neural network models. The abductive
learning framework explores a new direction for approaching human-level
learning ability.
| [
{
"version": "v1",
"created": "Sun, 4 Feb 2018 18:27:53 GMT"
},
{
"version": "v2",
"created": "Tue, 6 Feb 2018 12:34:01 GMT"
}
] | 1,517,961,600,000 | [
[
"Dai",
"Wang-Zhou",
""
],
[
"Xu",
"Qiu-Ling",
""
],
[
"Yu",
"Yang",
""
],
[
"Zhou",
"Zhi-Hua",
""
]
] |
1802.01282 | Maria Dimakopoulou | Maria Dimakopoulou, Benjamin Van Roy | Coordinated Exploration in Concurrent Reinforcement Learning | null | Proceedings of the 35th International Conference on Machine
Learning, volume 80 of Proceedings of Machine Learning Research, pages
1271-1279, Stockholmsm\"assan, Stockholm Sweden, 10-15 Jul 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a team of reinforcement learning agents that concurrently learn
to operate in a common environment. We identify three properties - adaptivity,
commitment, and diversity - which are necessary for efficient coordinated
exploration and demonstrate that straightforward extensions to single-agent
optimistic and posterior sampling approaches fail to satisfy them. As an
alternative, we propose seed sampling, which extends posterior sampling in a
manner that meets these requirements. Simulation results investigate how
per-agent regret decreases as the number of agents grows, establishing
substantial advantages of seed sampling over alternative exploration schemes.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2018 06:51:12 GMT"
}
] | 1,545,091,200,000 | [
[
"Dimakopoulou",
"Maria",
""
],
[
"Van Roy",
"Benjamin",
""
]
] |
1802.01482 | Jo\~ao Pedro Pedroso | Jo\~ao Pedro Pedroso, Alpar Vajk Kramer, Ke Zhang | The Sea Exploration Problem: Data-driven Orienteering on a Continuous
Surface | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a problem arising in sea exploration, where the aim is
to schedule the expedition of a ship for collecting information about the
resources on the seafloor. The aim is to collect data by probing on a set of
carefully chosen locations, so that the information available is optimally
enriched. This problem has similarities with the orienteering problem, where
the aim is to plan a time-limited trip for visiting a set of vertices,
collecting a prize at each of them, in such a way that the total value
collected is maximum. In our problem, the score at each vertex is associated
with an estimation of the level of the resource on the given surface, which is
done by regression using Gaussian processes. Hence, there is a correlation
among scores on the selected vertices; this is a first difference with respect
to the standard orienteering problem. The second difference is the location of
each vertex, which in our problem is a freely chosen point on a given surface.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2018 15:57:15 GMT"
},
{
"version": "v2",
"created": "Sat, 6 Apr 2019 05:24:48 GMT"
}
] | 1,554,768,000,000 | [
[
"Pedroso",
"João Pedro",
""
],
[
"Kramer",
"Alpar Vajk",
""
],
[
"Zhang",
"Ke",
""
]
] |
1802.01518 | Isaac Sledge | Isaac J. Sledge and Matthew S. Emigh and Jose C. Principe | Guided Policy Exploration for Markov Decision Processes using an
Uncertainty-Based Value-of-Information Criterion | IEEE Transactions on Neural Networks and Learning Systems | null | 10.1109/TNNLS.2018.2812709 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning in environments with many action-state pairs is
challenging. At issue is the number of episodes needed to thoroughly search the
policy space. Most conventional heuristics address this search problem in a
stochastic manner. This can leave large portions of the policy space unvisited
during the early training stages. In this paper, we propose an
uncertainty-based, information-theoretic approach for performing guided
stochastic searches that more effectively cover the policy space. Our approach
is based on the value of information, a criterion that provides the optimal
trade-off between expected costs and the granularity of the search process. The
value of information yields a stochastic routine for choosing actions during
learning that can explore the policy space in a coarse to fine manner. We
augment this criterion with a state-transition uncertainty factor, which guides
the search process into previously unexplored regions of the policy space.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2018 17:24:13 GMT"
}
] | 1,520,294,400,000 | [
[
"Sledge",
"Isaac J.",
""
],
[
"Emigh",
"Matthew S.",
""
],
[
"Principe",
"Jose C.",
""
]
] |
1802.01526 | Ryuta Arisaka | Ryuta Arisaka, Jeremie Dauphin | Abstractly Interpreting Argumentation Frameworks for Sharpening
Extensions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cycles of attacking arguments pose non-trivial issues in Dung style
argumentation theory, apparent behavioural difference between odd and even
length cycles being a notable one. While a few methods were proposed for
treating them, to - in particular - enable selection of acceptable arguments in
an odd-length cycle when Dung semantics could select none, so far the issues
have been observed from a purely argument-graph-theoretic perspective. Per
contra, we consider argument graphs together with a certain lattice like
semantic structure over arguments e.g. ontology. As we show, the
semantic-argument-graphic hybrid theory allows us to apply abstract
interpretation, a widely known methodology in static program analysis, to
formal argumentation. With this, even where no arguments in a cycle could be
selected sensibly, we can say more about the acceptability of arguments in an
argument framework that contains it. In a certain sense, we can verify Dung
extensions with respect to a semantic structure in this hybrid theory, to
consolidate our confidence in their suitability. By defining the theory, and by
making comparisons to existing approaches, we ultimately discover that whether
Dung semantics, or an alternative semantics such as cf2, is adequate or
problematic depends not just on an argument graph but also on the semantic
relation among the arguments in the graph.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2018 17:36:40 GMT"
}
] | 1,517,875,200,000 | [
[
"Arisaka",
"Ryuta",
""
],
[
"Dauphin",
"Jeremie",
""
]
] |
1802.01604 | Chandrayee Basu | Chandrayee Basu, Mukesh Singhal, Anca D. Dragan | Learning from Richer Human Guidance: Augmenting Comparison-Based
Learning with Feature Queries | 8 pages, 8 figures, HRI 2018 | null | 10.1145/3171221.3171284 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We focus on learning the desired objective function for a robot. Although
trajectory demonstrations can be very informative of the desired objective,
they can also be difficult for users to provide. Answers to comparison queries,
asking which of two trajectories is preferable, are much easier for users, and
have emerged as an effective alternative. Unfortunately, comparisons are far
less informative. We propose that there is much richer information that users
can easily provide and that robots ought to leverage. We focus on augmenting
comparisons with feature queries, and introduce a unified formalism for
treating all answers as observations about the true desired reward. We derive
an active query selection algorithm, and test these queries in simulation and
on real users. We find that richer, feature-augmented queries can extract more
information faster, leading to robots that better match user preferences in
their behavior.
| [
{
"version": "v1",
"created": "Mon, 5 Feb 2018 19:03:26 GMT"
}
] | 1,517,961,600,000 | [
[
"Basu",
"Chandrayee",
""
],
[
"Singhal",
"Mukesh",
""
],
[
"Dragan",
"Anca D.",
""
]
] |
1802.02172 | Alexander Gorban | Alexander N. Gorban, Bogdan Grechuk, Ivan Y. Tyukin | Augmented Artificial Intelligence: a Conceptual Framework | The mathematical part is significantly extended. New stochastic
separation theorems are proven for log-concave distributions. Some previously
formulated hypotheses are confirmed | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | All artificial intelligence (AI) systems make errors. These errors are
unexpected and often differ from typical human mistakes ("non-human"
errors). AI errors should be corrected without damaging existing skills
and, hopefully, without requiring direct human expertise. This paper presents an initial
summary report of a project taking a new and systematic approach to improving the
intellectual effectiveness of the individual AI by communities of AIs. We
combine some ideas of learning in heterogeneous multiagent systems with new and
original mathematical approaches for non-iterative corrections of errors of
legacy AI systems. The mathematical foundations of AI non-destructive
correction are presented and a series of new stochastic separation theorems is
proven. These theorems provide a new instrument for the development, analysis,
and assessment of machine learning methods and algorithms in high dimension.
They demonstrate that in high dimensions and even for exponentially large
samples, linear classifiers in their classical Fisher's form are powerful
enough to separate errors from correct responses with high probability and to
provide efficient solution to the non-destructive corrector problem. In
particular, we prove some hypotheses formulated in our paper `Stochastic
Separation Theorems' (Neural Networks, 94, 255--259, 2017), and answer one
general problem published by Donoho and Tanner in 2009.
| [
{
"version": "v1",
"created": "Tue, 6 Feb 2018 19:05:27 GMT"
},
{
"version": "v2",
"created": "Wed, 28 Feb 2018 16:19:55 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Mar 2018 13:40:05 GMT"
}
] | 1,522,195,200,000 | [
[
"Gorban",
"Alexander N.",
""
],
[
"Grechuk",
"Bogdan",
""
],
[
"Tyukin",
"Ivan Y.",
""
]
] |
1802.02434 | Junhua Wu | Junhua Wu and Sergey Polyakovskiy and Markus Wagner and Frank Neumann | Evolutionary Computation plus Dynamic Programming for the Bi-Objective
Travelling Thief Problem | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research proposes a novel indicator-based hybrid evolutionary approach
that combines approximate and exact algorithms. We apply it to a new
bi-criteria formulation of the travelling thief problem, which is known to the
Evolutionary Computation community as a benchmark multi-component optimisation
problem that interconnects two classical NP-hard problems: the travelling
salesman problem and the 0-1 knapsack problem. Our approach employs the exact
dynamic programming algorithm for the underlying Packing-While-Travelling (PWT)
problem as a subroutine within a bi-objective evolutionary algorithm. This
design takes advantage of the data extracted from Pareto fronts generated by
the dynamic program to achieve better solutions. Furthermore, we develop a
number of novel indicators and selection mechanisms to strengthen synergy of
the two algorithmic components of our approach. The results of computational
experiments show that the approach is capable of outperforming the
state-of-the-art results for the single-objective case of the problem.
| [
{
"version": "v1",
"created": "Wed, 7 Feb 2018 14:34:53 GMT"
}
] | 1,518,048,000,000 | [
[
"Wu",
"Junhua",
""
],
[
"Polyakovskiy",
"Sergey",
""
],
[
"Wagner",
"Markus",
""
],
[
"Neumann",
"Frank",
""
]
] |
1802.02468 | Mauro Scanagatta | Mauro Scanagatta, Giorgio Corani, Marco Zaffalon, Jaemin Yoo, U Kang | Efficient Learning of Bounded-Treewidth Bayesian Networks from Complete
and Incomplete Data Sets | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Learning a Bayesian network with bounded treewidth is important for reducing
the complexity of the inferences. We present a novel anytime algorithm (k-MAX)
for this task, which scales up to thousands of variables. Through
extensive experiments we show that it consistently yields higher-scoring
structures than its competitors on complete data sets. We then consider the
problem of structure learning from incomplete data sets. This can be addressed
by structural EM, which however is computationally very demanding. We thus
adopt the novel k-MAX algorithm in the maximization step of structural EM,
obtaining an efficient computation of the expected sufficient statistics. We
test the resulting structural EM method on the task of imputing missing data,
comparing it against the state-of-the-art approach based on random forests. Our
approach achieves the same imputation accuracy of the competitors, but in about
one tenth of the time. Furthermore we show that it has worst-case complexity
linear in the input size, and that it is easily parallelizable.
| [
{
"version": "v1",
"created": "Wed, 7 Feb 2018 15:09:32 GMT"
}
] | 1,518,048,000,000 | [
[
"Scanagatta",
"Mauro",
""
],
[
"Corani",
"Giorgio",
""
],
[
"Zaffalon",
"Marco",
""
],
[
"Yoo",
"Jaemin",
""
],
[
"Kang",
"U",
""
]
] |
1802.03216 | Jordi Grau-Moya | Jordi Grau-Moya and Felix Leibfried and Haitham Bou-Ammar | Balancing Two-Player Stochastic Games with Soft Q-Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Within the context of video games the notion of perfectly rational agents can
be undesirable as it leads to uninteresting situations, where humans face tough
adversarial decision makers. Current frameworks for stochastic games and
reinforcement learning prohibit tuneable strategies as they seek optimal
performance. In this paper, we enable such tuneable behaviour by generalising
soft Q-learning to stochastic games, where more than one agent interact
strategically. We contribute both theoretically and empirically. On the theory
side, we show that games with soft Q-learning exhibit a unique value and
generalise team games and zero-sum games far beyond these two extremes to cover
a continuous spectrum of gaming behaviour. Experimentally, we show how tuning
agents' constraints affect performance and demonstrate, through a neural
network architecture, how to reliably balance games with high-dimensional
representations.
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2018 12:03:15 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Jan 2019 13:14:52 GMT"
}
] | 1,546,992,000,000 | [
[
"Grau-Moya",
"Jordi",
""
],
[
"Leibfried",
"Felix",
""
],
[
"Bou-Ammar",
"Haitham",
""
]
] |
1802.03417 | C\'edric Beaulac | C\'edric Beaulac and Fabrice Larribe | Narrow Artificial Intelligence with Machine Learning for Real-Time
Estimation of a Mobile Agents Location Using Hidden Markov Models | null | International Journal of Computer Games Technology, vol. 2017,
Article ID 4939261, 10 pages, 2017 | 10.1155/2017/4939261 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We propose to use a supervised machine learning technique to track the
location of a mobile agent in real time. Hidden Markov Models are used to build
artificial intelligence that estimates the unknown position of a mobile target
moving in a defined environment. This narrow artificial intelligence performs
two distinct tasks. First, it provides real-time estimation of the mobile
agent's position using the forward algorithm. Second, it uses the Baum-Welch
algorithm as a statistical learning tool to gain knowledge of the mobile
target. Finally, an experimental environment is proposed, namely a video game
that we use to test our artificial intelligence. We present statistical and
graphical results to illustrate the efficiency of our method.
| [
{
"version": "v1",
"created": "Fri, 9 Feb 2018 19:18:21 GMT"
}
] | 1,518,480,000,000 | [
[
"Beaulac",
"Cédric",
""
],
[
"Larribe",
"Fabrice",
""
]
] |
1802.03493 | Yinlam Chow | Mehrdad Farajtabar, Yinlam Chow, and Mohammad Ghavamzadeh | More Robust Doubly Robust Off-policy Evaluation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the problem of off-policy evaluation (OPE) in reinforcement learning
(RL), where the goal is to estimate the performance of a policy from the data
generated by another policy(ies). In particular, we focus on the doubly robust
(DR) estimators that consist of an importance sampling (IS) component and a
performance model, and utilize the low (or zero) bias of IS and low variance of
the model at the same time. Although the accuracy of the model has a huge
impact on the overall performance of DR, most of the work on using the DR
estimators in OPE has been focused on improving the IS part, and not much on
how to learn the model. In this paper, we propose alternative DR estimators,
called more robust doubly robust (MRDR), that learn the model parameter by
minimizing the variance of the DR estimator. We first present a formulation for
learning the DR model in RL. We then derive formulas for the variance of the DR
estimator in both contextual bandits and RL, such that their gradients
w.r.t. the model parameters can be estimated from the samples, and propose
methods to efficiently minimize the variance. We prove that the MRDR estimators
are strongly consistent and asymptotically optimal. Finally, we evaluate MRDR
in bandits and RL benchmark problems, and compare its performance with the
existing methods.
| [
{
"version": "v1",
"created": "Sat, 10 Feb 2018 01:32:03 GMT"
},
{
"version": "v2",
"created": "Wed, 23 May 2018 18:13:43 GMT"
}
] | 1,527,206,400,000 | [
[
"Farajtabar",
"Mehrdad",
""
],
[
"Chow",
"Yinlam",
""
],
[
"Ghavamzadeh",
"Mohammad",
""
]
] |
1802.03642 | Krishnendu Chatterjee | Krishnendu Chatterjee and Laurent Doyen | Graph Planning with Expected Finite Horizon | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph planning gives rise to fundamental algorithmic questions such as
shortest path, traveling salesman problem, etc. A classical problem in discrete
planning is to consider a weighted graph and construct a path that maximizes
the sum of weights for a given time horizon $T$. However, in many scenarios,
the time horizon is not fixed, but the stopping time is chosen according to
some distribution such that the expected stopping time is $T$. If the stopping
time distribution is not known, then to ensure robustness, the distribution is
chosen by an adversary, to represent the worst-case scenario.
A stationary plan for every vertex always chooses the same outgoing edge. For
fixed horizon or fixed stopping-time distribution, stationary plans are not
sufficient for optimality. Quite surprisingly we show that when an adversary
chooses the stopping-time distribution with expected stopping time $T$, then
stationary plans are sufficient. While computing optimal stationary plans for
fixed horizon is NP-complete, we show that computing optimal stationary plans
under adversarial stopping-time distribution can be achieved in polynomial
time. Consequently, our polynomial-time algorithm for adversarial stopping time
also computes an optimal plan among all possible plans.
| [
{
"version": "v1",
"created": "Sat, 10 Feb 2018 19:12:03 GMT"
}
] | 1,518,480,000,000 | [
[
"Chatterjee",
"Krishnendu",
""
],
[
"Doyen",
"Laurent",
""
]
] |
1802.04009 | Yuan Jin | Yuan Jin, Mark Carman, Ye Zhu, Wray Buntine | Distinguishing Question Subjectivity from Difficulty for Improved
Crowdsourcing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | The questions in a crowdsourcing task typically exhibit varying degrees of
difficulty and subjectivity. Their joint effects give rise to the variation in
responses to the same question by different crowd-workers. This variation is
low when the question is easy to answer and objective, and high when it is
difficult and subjective. Unfortunately, current quality control methods for
crowdsourcing consider only the question difficulty to account for the
variation. As a result, these methods cannot distinguish workers' personal
preferences for different correct answers of a partially subjective question
from their ability/expertise to avoid objectively wrong answers for that
question. To address this issue, we present a probabilistic model which (i)
explicitly encodes question difficulty as a model parameter and (ii) implicitly
encodes question subjectivity via latent preference factors for crowd-workers.
We show that question subjectivity induces grouping of crowd-workers, revealed
through clustering of their latent preferences. Moreover, we develop a
quantitative measure of the subjectivity of a question. Experiments show that
our model (1) improves the performance of both quality control for crowd-sourced
answers and next answer prediction for crowd-workers, and (2) can potentially
provide coherent rankings of questions in terms of their difficulty and
subjectivity, so that task providers can refine their designs of the
crowdsourcing tasks, e.g. by removing highly subjective questions or
inappropriately difficult questions.
| [
{
"version": "v1",
"created": "Mon, 12 Feb 2018 12:39:28 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Feb 2018 04:44:00 GMT"
}
] | 1,518,652,800,000 | [
[
"Jin",
"Yuan",
""
],
[
"Carman",
"Mark",
""
],
[
"Zhu",
"Ye",
""
],
[
"Buntine",
"Wray",
""
]
] |
1802.04086 | Evangelos Michelioudakis | Elias Alevizos, Alexander Artikis, Nikos Katzouris, Evangelos
Michelioudakis, Georgios Paliouras | The Complex Event Recognition Group | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Complex Event Recognition (CER) group is a research team, affiliated with
the National Centre of Scientific Research "Demokritos" in Greece. The CER
group works towards advanced and efficient methods for the recognition of
complex events in a multitude of large, heterogeneous and interdependent data
streams. Its research covers multiple aspects of complex event recognition,
from efficient detection of patterns on event streams to handling uncertainty
and noise in streams, and machine learning techniques for inferring interesting
patterns. Lately, it has expanded to methods for forecasting the occurrence of
events. It was founded in 2009 and currently hosts 3 senior researchers, 5 PhD
students, and works regularly with undergraduate students.
| [
{
"version": "v1",
"created": "Mon, 12 Feb 2018 14:54:30 GMT"
}
] | 1,518,480,000,000 | [
[
"Alevizos",
"Elias",
""
],
[
"Artikis",
"Alexander",
""
],
[
"Katzouris",
"Nikos",
""
],
[
"Michelioudakis",
"Evangelos",
""
],
[
"Paliouras",
"Georgios",
""
]
] |
1802.04093 | Subhash Kak | Subhash Kak | Reasoning in a Hierarchical System with Missing Group Size Information | 9 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper analyzes the problem of judgments or preferences subsequent to
initial analysis by autonomous agents in a hierarchical system where the higher
level agents do not have access to group size information. We propose methods
that reduce instances of preference reversal of the kind encountered in
Simpson's paradox.
| [
{
"version": "v1",
"created": "Wed, 7 Feb 2018 23:08:31 GMT"
}
] | 1,518,480,000,000 | [
[
"Kak",
"Subhash",
""
]
] |
1802.04095 | Tevfik Bulut Industry and Technology Specialist | Tevfik Bulut | A New Multi Criteria Decision Making Method: Approach of Logarithmic
Concept (APLOCO) | 19 pages | International Journal of Artificial Intelligence and Applications
(IJAIA), Vol.9, No.1, January 2018. p.15-33 | 10.5121/ijaia.2018.9102 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primary aim of the study is to introduce APLOCO method which is developed
for the solution of multicriteria decision making problems both theoretically
and practically. In this context, the application subject of APLOCO is the
evaluation of the investment potential of different cities with metropolitan status
in Turkey. The secondary purpose of the study is to identify the independent
variables affecting the factories in the operating phase and to estimate the
effect levels of independent variables on the dependent variable in the
organized industrial zones (OIZs), whose mission is to reduce regional
development disparities and to mobilize local production dynamics. For this
purpose, the effect levels of independent variables on dependent variables have
been determined using the multilayer perceptron (MLP) method, which has a wide
use in artificial neural networks (ANNs). The effect levels derived from MLP
have been then used as the weight levels of the decision criteria in APLOCO.
The independent variables included in MLP are also used as the decision
criteria in APLOCO. According to the results obtained from APLOCO, Istanbul
city is the best alternative in terms of the investment potential, and the other
alternatives are Manisa, Denizli, Izmir, Kocaeli, Bursa, Ankara, Adana, and
Antalya, respectively. Although APLOCO is used to solve the ranking problem in
order to show the application process in the paper, it can be employed easily in
the solution of classification and selection problems. On the other hand, the
study also shows a rare example of the nested usage of APLOCO which is one of
the methods of operation research as well as MLP used in determination of
weights.
| [
{
"version": "v1",
"created": "Thu, 8 Feb 2018 20:19:34 GMT"
}
] | 1,518,480,000,000 | [
[
"Bulut",
"Tevfik",
""
]
] |
1802.04451 | Tshilidzi Marwala | Tshilidzi Marwala and Bo Xing | Blockchain and Artificial Intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is undeniable that artificial intelligence (AI) and blockchain concepts
are spreading at a phenomenal rate. Both technologies have distinct degree of
technological complexity and multi-dimensional business implications. However,
a common misunderstanding about the blockchain concept, in particular, is that
blockchain is decentralized and is not controlled by anyone. But the underlying
development of a blockchain system is still attributed to a cluster of core
developers. Take the smart contract as an example: it is essentially a collection
of codes (or functions) and data (or states) that are programmed and deployed
on a blockchain (say, Ethereum) by different human programmers. It is thus,
unfortunately, less likely to be free of loopholes and flaws. In this article,
through a brief overview about how artificial intelligence could be used to
deliver bug-free smart contracts so as to achieve the goal of blockchain 2.0, we
emphasize that the blockchain implementation can be assisted or enhanced via
various AI techniques. The alliance of AI and blockchain is expected to create
numerous possibilities.
| [
{
"version": "v1",
"created": "Tue, 13 Feb 2018 03:10:59 GMT"
},
{
"version": "v2",
"created": "Tue, 23 Oct 2018 15:43:32 GMT"
}
] | 1,540,339,200,000 | [
[
"Marwala",
"Tshilidzi",
""
],
[
"Xing",
"Bo",
""
]
] |
1802.04520 | Max Ferguson | M Ferguson, K. H. Law | Learning Robust and Adaptive Real-World Continuous Control Using
Simulation and Transfer Learning | The paper has several technical errors. Rather than correct these
errors we have chosen to significantly reformulate the work | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use model-free reinforcement learning, extensive simulation, and transfer
learning to develop a continuous control algorithm that has good zero-shot
performance in a real physical environment. We train a simulated agent to act
optimally across a set of similar environments, each with dynamics drawn from a
prior distribution. We propose that the agent is able to adjust its actions
almost immediately, based on a small set of observations. This robust and
adaptive behavior is enabled by using a policy gradient algorithm with a Long
Short-Term Memory (LSTM) function approximation. Finally, we train an agent to
navigate a two-dimensional environment with uncertain dynamics and noisy
observations. We demonstrate that this agent has good zero-shot performance in
a real physical environment. Our preliminary results indicate that the agent is
able to infer the environmental dynamics after only a few timesteps, and adjust
its actions accordingly.
| [
{
"version": "v1",
"created": "Tue, 13 Feb 2018 09:23:14 GMT"
},
{
"version": "v2",
"created": "Thu, 8 Mar 2018 07:43:44 GMT"
}
] | 1,520,553,600,000 | [
[
"Ferguson",
"M",
""
],
[
"Law",
"K. H.",
""
]
] |
1802.04592 | Ling Pan | Ling Pan and Qingpeng Cai and Zhixuan Fang and Pingzhong Tang and
Longbo Huang | A Deep Reinforcement Learning Framework for Rebalancing Dockless Bike
Sharing Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bike sharing provides an environment-friendly way for traveling and is
booming all over the world. Yet, due to the high similarity of user travel
patterns, the bike imbalance problem constantly occurs, especially for dockless
bike sharing systems, causing significant impact on service quality and company
revenue. Thus, it has become a critical task for bike sharing systems to
resolve such imbalance efficiently. In this paper, we propose a novel deep
reinforcement learning framework for incentivizing users to rebalance such
systems. We model the problem as a Markov decision process and take both
spatial and temporal features into consideration. We develop a novel deep
reinforcement learning algorithm called Hierarchical Reinforcement Pricing
(HRP), which builds upon the Deep Deterministic Policy Gradient algorithm.
Different from existing methods that often ignore spatial information and rely
heavily on accurate prediction, HRP captures both spatial and temporal
dependencies using a divide-and-conquer structure with an embedded localized
module. We conduct extensive experiments to evaluate HRP, based on a dataset
from Mobike, a major Chinese dockless bike sharing company. Results show that
HRP performs close to the 24-timeslot look-ahead optimization, and outperforms
state-of-the-art methods in both service level and bike distribution. It also
transfers well when applied to unseen areas.
| [
{
"version": "v1",
"created": "Tue, 13 Feb 2018 12:43:03 GMT"
},
{
"version": "v2",
"created": "Mon, 21 May 2018 02:42:48 GMT"
},
{
"version": "v3",
"created": "Mon, 10 Sep 2018 15:57:01 GMT"
},
{
"version": "v4",
"created": "Sun, 2 Dec 2018 05:02:51 GMT"
}
] | 1,543,881,600,000 | [
[
"Pan",
"Ling",
""
],
[
"Cai",
"Qingpeng",
""
],
[
"Fang",
"Zhixuan",
""
],
[
"Tang",
"Pingzhong",
""
],
[
"Huang",
"Longbo",
""
]
] |
1802.04818 | Peter Clark | Peter Clark | Story Generation and Aviation Incident Representation | null | null | null | Working Note 14 (1999) | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This working note discusses the topic of story generation, with a view to
identifying the knowledge required to understand aviation incident narratives
(which have structural similarities to stories), following the premise that to
understand aviation incidents, one should at least be able to generate examples
of them. We give a brief overview of aviation incidents and their relation to
stories, and then describe two of our earlier attempts (using `scripts' and
`story grammars') at incident generation which did not evolve promisingly.
Following this, we describe a simple incident generator which did work (at a
`toy' level), using a `world simulation' approach. This generator is based on
Meehan's TALE-SPIN story generator (1977). We conclude with a critique of the
approach.
| [
{
"version": "v1",
"created": "Tue, 13 Feb 2018 19:03:21 GMT"
}
] | 1,518,652,800,000 | [
[
"Clark",
"Peter",
""
]
] |
1802.05142 | Ram\'on Pino P\'erez | Isabelle Bloch, J\'er\^ome Lang, Ram\'on Pino P\'erez, Carlos
Uzc\'ategui | Morphologic for knowledge dynamics: revision, fusion, abduction | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several tasks in artificial intelligence require the ability to find models
of knowledge dynamics. They include belief revision, fusion and belief
merging, and abduction. In this paper we exploit the algebraic framework of
mathematical morphology in the context of propositional logic, and define
operations such as dilation or erosion of a set of formulas. We derive concrete
operators, based on a semantic approach, that have an intuitive interpretation
and that are formally well behaved, to perform revision, fusion and abduction.
Computation and tractability are addressed, and simple examples illustrate the
typical results that can be obtained.
| [
{
"version": "v1",
"created": "Wed, 14 Feb 2018 15:08:06 GMT"
}
] | 1,518,652,800,000 | [
[
"Bloch",
"Isabelle",
""
],
[
"Lang",
"Jérôme",
""
],
[
"Pérez",
"Ramón Pino",
""
],
[
"Uzcátegui",
"Carlos",
""
]
] |
1802.05219 | Michael Green | Gabriella A. B. Barros, Michael Cerny Green, Antonios Liapis, and
Julian Togelius | Who Killed Albert Einstein? From Open Data to Murder Mystery Games | 11 pages, 6 figures, 2 tables | 10.1109/TG.2018.2806190 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a framework for generating adventure games from open
data. Focusing on the murder mystery type of adventure games, the generator is
able to transform open data from Wikipedia articles, OpenStreetMap and images
from Wikimedia Commons into WikiMysteries. Every WikiMystery game revolves
around the murder of a person with a Wikipedia article and populates the game
with suspects who must be arrested by the player if guilty of the murder or
absolved if innocent. Starting from only one person as the victim, an extensive
generative pipeline finds suspects, their alibis, and paths connecting them
from open data, transforms open data into cities, buildings, non-player
characters, locks and keys and dialog options. The paper describes in detail
each generative step, provides a specific playthrough of one WikiMystery where
Albert Einstein is murdered, and evaluates the outcomes of games generated for
the 100 most influential people of the 20th century.
| [
{
"version": "v1",
"created": "Wed, 14 Feb 2018 17:17:54 GMT"
}
] | 1,519,084,800,000 | [
[
"Barros",
"Gabriella A. B.",
""
],
[
"Green",
"Michael Cerny",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Togelius",
"Julian",
""
]
] |
1802.05340 | Fei Wang | Fei Wang, Tiark Rompf | From Gameplay to Symbolic Reasoning: Learning SAT Solver Heuristics in
the Style of Alpha(Go) Zero | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the recent successes of deep neural networks in various fields such
as image and speech recognition, natural language processing, and reinforcement
learning, we still face big challenges in bringing the power of numeric
optimization to symbolic reasoning. Researchers have proposed different avenues
such as neural machine translation for proof synthesis, vectorization of
symbols and expressions for representing symbolic patterns, and coupling of
neural back-ends for dimensionality reduction with symbolic front-ends for
decision making. However, these initial explorations are still only point
solutions, and bear other shortcomings such as lack of correctness guarantees.
In this paper, we present our approach of casting symbolic reasoning as games,
and directly harnessing the power of deep reinforcement learning in the style
of Alpha(Go) Zero on symbolic problems. Using the Boolean Satisfiability (SAT)
problem as showcase, we demonstrate the feasibility of our method, and the
advantages of modularity, efficiency, and correctness guarantees.
| [
{
"version": "v1",
"created": "Wed, 14 Feb 2018 22:25:47 GMT"
}
] | 1,518,739,200,000 | [
[
"Wang",
"Fei",
""
],
[
"Rompf",
"Tiark",
""
]
] |
1802.05639 | Alessandro Antonucci | Sabina Marchetti and Alessandro Antonucci | Reliable Uncertain Evidence Modeling in Bayesian Networks by Credal
Networks | 19 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A reliable modeling of uncertain evidence in Bayesian networks based on a
set-valued quantification is proposed. Both soft and virtual evidences are
considered. We show that evidence propagation in this setup can be reduced to
standard updating in an augmented credal network, equivalent to a set of
consistent Bayesian networks. A characterization of the computational
complexity for this task is derived together with an efficient exact procedure
for a subclass of instances. In the case of multiple uncertain evidences over
the same variable, the proposed procedure can provide a set-valued version of
the geometric approach to opinion pooling.
| [
{
"version": "v1",
"created": "Thu, 15 Feb 2018 16:25:47 GMT"
}
] | 1,518,739,200,000 | [
[
"Marchetti",
"Sabina",
""
],
[
"Antonucci",
"Alessandro",
""
]
] |
1802.05835 | Siddharth Srivastava | Siddharth Srivastava, Nishant Desai, Richard Freedman, Shlomo
Zilberstein | An Anytime Algorithm for Task and Motion MDPs | 7 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrated task and motion planning has emerged as a challenging problem in
sequential decision making, where a robot needs to compute high-level strategy
and low-level motion plans for solving complex tasks. While high-level
strategies require decision making over longer time-horizons and scales, their
feasibility depends on low-level constraints based upon the geometries and
continuous dynamics of the environment. The hybrid nature of this problem makes
it difficult to scale; most existing approaches focus on deterministic, fully
observable scenarios. We present a new approach where the high-level decision
problem occurs in a stochastic setting and can be modeled as a Markov decision
process. In contrast to prior efforts, we show that complete MDP policies, or
contingent behaviors, can be computed effectively in an anytime fashion. Our
algorithm continuously improves the quality of the solution and is guaranteed
to be probabilistically complete. We evaluate the performance of our approach
on a challenging, realistic test problem: autonomous aircraft inspection. Our
results show that we can effectively compute consistent task and motion
policies for the most likely execution-time outcomes using only a fraction of
the computation required to develop the complete task and motion policy.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2018 04:52:58 GMT"
}
] | 1,518,998,400,000 | [
[
"Srivastava",
"Siddharth",
""
],
[
"Desai",
"Nishant",
""
],
[
"Freedman",
"Richard",
""
],
[
"Zilberstein",
"Shlomo",
""
]
] |
1802.05875 | Zolt\'an Kov\'acs | Zolt\'an Kov\'acs, Tom\'as Recio, M. Pilar V\'elez | Detecting truth, just on parts | 18 pages, 5 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and discuss, through a computational algebraic geometry
approach, the automatic reasoning handling of propositions that are
simultaneously true and false over some relevant collections of instances. A
rigorous, algorithmic criterion is presented for detecting such cases, and its
performance is exemplified through the implementation of this test on the
dynamic geometry program GeoGebra.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2018 09:24:29 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Mar 2018 19:45:41 GMT"
}
] | 1,522,195,200,000 | [
[
"Kovács",
"Zoltán",
""
],
[
"Recio",
"Tomás",
""
],
[
"Vélez",
"M. Pilar",
""
]
] |
1802.05944 | Hui Wang | Hui Wang, Michael Emmerich, Aske Plaat | Monte Carlo Q-learning for General Game Playing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | After the recent groundbreaking results of AlphaGo, we have seen a strong
interest in reinforcement learning in game playing. General Game Playing (GGP)
provides a good testbed for reinforcement learning. In GGP, a specification of
game rules is given. GGP problems can be solved by reinforcement learning.
Q-learning is one of the canonical reinforcement learning methods, and has been
used by (Banerjee & Stone, IJCAI 2007) in GGP. In this paper we implement
Q-learning in GGP for three small-board games (Tic-Tac-Toe, Connect Four, Hex),
to allow comparison to Banerjee et al. As expected, Q-learning converges,
although much slower than MCTS. Borrowing an idea from MCTS, we enhance
Q-learning with Monte Carlo Search, to give QM-learning. This enhancement
improves the performance of pure Q-learning. We believe that QM-learning can
also be used to improve performance of reinforcement learning further for
larger games, something which we will test in future work.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2018 14:18:46 GMT"
},
{
"version": "v2",
"created": "Mon, 21 May 2018 16:16:27 GMT"
}
] | 1,526,947,200,000 | [
[
"Wang",
"Hui",
""
],
[
"Emmerich",
"Michael",
""
],
[
"Plaat",
"Aske",
""
]
] |
1802.06068 | Peter Kokol PhD | Peter Kokol, Jernej Zavr\v{s}nik, Helena Bla\v{z}un Vo\v{s}ner | Artificial intelligence and pediatrics: A synthetic mini review | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The use of artificial intelligence in medicine can be traced back
to 1968 when Paycha published his paper Le diagnostic a l'aide d'intelligences
artificielle, presentation de la premiere machine diagnostri. A few years later
Shortliffe et al. presented an expert system named Mycin which was able to
identify bacteria causing severe blood infections and to recommend antibiotics.
Despite the fact that Mycin outperformed members of the Stanford medical school
in the reliability of diagnosis, it was never used in practice due to a legal
issue: who do you sue if it gives a wrong diagnosis? However, only in 2016, when
the artificial intelligence software built into the IBM Watson AI platform
correctly diagnosed and proposed an effective treatment for a 60-year-old
woman's rare form of leukemia, did the use of AI in medicine become really
popular. One of the first papers presenting the use of AI in paediatrics was
published in 1984. The
paper introduced a computer-assisted medical decision making system called
SHELP.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2018 18:51:27 GMT"
}
] | 1,518,998,400,000 | [
[
"Kokol",
"Peter",
""
],
[
"Završnik",
"Jernej",
""
],
[
"Vošner",
"Helena Blažun",
""
]
] |
1802.06137 | Anagha Kulkarni | Anagha Kulkarni, Siddharth Srivastava and Subbarao Kambhampati | A Unified Framework for Planning in Adversarial and Cooperative
Environments | 8 pages, 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Users of AI systems may rely upon them to produce plans for achieving desired
objectives. Such AI systems should be able to compute obfuscated plans whose
execution in adversarial situations protects privacy, as well as legible plans
which are easy for team members to understand in cooperative situations. We
develop a unified framework that addresses these dual problems by computing
plans with a desired level of comprehensibility from the point of view of a
partially informed observer. For adversarial settings, our approach produces
obfuscated plans with observations that are consistent with at least k goals
from a set of decoy goals. By slightly varying our framework, we present an
approach for goal legibility in cooperative settings which produces plans that
achieve a goal while being consistent with at most j goals from a set of
confounding goals. In addition, we show how the observability of the observer
can be controlled to either obfuscate or clarify the next actions in a plan
when the goal is known to the observer. We present theoretical results on the
complexity analysis of our problems. We demonstrate the execution of obfuscated
and legible plans in a cooking domain using a physical robot Fetch. We also
provide an empirical evaluation to show the feasibility and usefulness of our
approaches using IPC domains.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2018 21:53:59 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Feb 2018 04:51:37 GMT"
},
{
"version": "v3",
"created": "Thu, 26 Jul 2018 04:28:41 GMT"
}
] | 1,532,649,600,000 | [
[
"Kulkarni",
"Anagha",
""
],
[
"Srivastava",
"Siddharth",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1802.06215 | Panpan Cai | Panpan Cai, Yuanfu Luo, David Hsu and Wee Sun Lee | HyP-DESPOT: A Hybrid Parallel Algorithm for Online Planning under
Uncertainty | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Planning under uncertainty is critical for robust robot performance in
uncertain, dynamic environments, but it incurs high computational cost.
State-of-the-art online search algorithms, such as DESPOT, have vastly improved
the computational efficiency of planning under uncertainty and made it a
valuable tool for robotics in practice. This work takes one step further by
leveraging both CPU and GPU parallelization in order to achieve near real-time
online planning performance for complex tasks with large state, action, and
observation spaces. Specifically, we propose Hybrid Parallel DESPOT
(HyP-DESPOT), a massively parallel online planning algorithm that integrates
CPU and GPU parallelism in a multi-level scheme. It performs parallel DESPOT
tree search by simultaneously traversing multiple independent paths using
multi-core CPUs and performs parallel Monte-Carlo simulations at the leaf nodes
of the search tree using GPUs. Experimental results show that HyP-DESPOT speeds
up online planning by up to several hundred times, compared with the original
DESPOT algorithm, in several challenging robotic tasks in simulation.
| [
{
"version": "v1",
"created": "Sat, 17 Feb 2018 08:59:56 GMT"
}
] | 1,519,084,800,000 | [
[
"Cai",
"Panpan",
""
],
[
"Luo",
"Yuanfu",
""
],
[
"Hsu",
"David",
""
],
[
"Lee",
"Wee Sun",
""
]
] |
1802.06318 | Matheus Nohra Haddad | Matheus Nohra Haddad, Rafael Martinelli, Thibaut Vidal, Luiz Satoru
Ochi, Simone Martins, Marcone Jamilson Freitas Souza, Richard Hartl | Large Neighborhood-Based Metaheuristic and Branch-and-Price for the
Pickup and Delivery Problem with Split Loads | 37 pages, 4 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the multi-vehicle one-to-one pickup and delivery problem with
split loads, an NP-hard problem linked with a variety of applications for bulk
product transportation, bike-sharing systems and inventory re-balancing. This
problem is notoriously difficult due to the interaction of two challenging
vehicle routing attributes, "pickups and deliveries" and "split deliveries".
This possibly leads to optimal solutions of a size that grows exponentially
with the instance size, containing multiple visits per customer pair, even in
the same route. To solve this problem, we propose an iterated local search
metaheuristic as well as a branch-and-price algorithm. The core of the
metaheuristic consists of a new large neighborhood search, which reduces the
problem of finding the best insertion combination of a pickup and delivery pair
into a route (with possible splits) to a resource-constrained shortest path and
knapsack problem. Similarly, the branch-and-price algorithm uses sophisticated
labeling techniques, route relaxations, pre-processing and branching rules for
an efficient resolution. Our computational experiments on classical
single-vehicle instances demonstrate the excellent performance of the
metaheuristic, which produces new best known solutions for 92 out of 93 test
instances, and outperforms all previous algorithms. Experimental results on new
multi-vehicle instances with distance constraints are also reported. The
branch-and-price algorithm produces optimal solutions for instances with up to
20 pickup-and-delivery pairs, and very accurate solutions are found by the
metaheuristic.
| [
{
"version": "v1",
"created": "Sun, 18 Feb 2018 02:09:20 GMT"
}
] | 1,519,084,800,000 | [
[
"Haddad",
"Matheus Nohra",
""
],
[
"Martinelli",
"Rafael",
""
],
[
"Vidal",
"Thibaut",
""
],
[
"Ochi",
"Luiz Satoru",
""
],
[
"Martins",
"Simone",
""
],
[
"Souza",
"Marcone Jamilson Freitas",
""
],
[
"Hartl",
"Richard",
""
]
] |
1802.06588 | Rodrigo Marcos | Rodrigo Marcos, Oliva Garc\'ia-Cant\'u, Ricardo Herranz | A Machine Learning Approach to Air Traffic Route Choice Modelling | Submitted for review to Transportation Research Part C: Emerging
Technologies | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Air Traffic Flow and Capacity Management (ATFCM) is one of the constituent
parts of Air Traffic Management (ATM). The goal of ATFCM is to make airport and
airspace capacity meet traffic demand and, when capacity opportunities are
exhausted, optimise traffic flows to meet the available capacity. One of the
key enablers of ATFCM is the accurate estimation of future traffic demand. The
available information (schedules, flight plans, etc.) and its associated level
of uncertainty differ across the different ATFCM planning phases, leading to
qualitative differences between the types of forecasting that are feasible at
each time horizon. While abundant research has been conducted on tactical
trajectory prediction (i.e., during the day of operations), trajectory
prediction in the pre-tactical phase, when few or no flight plans are
available, has received much less attention. As a consequence, the methods
currently in use for pre-tactical traffic forecast are still rather
rudimentary, often resulting in suboptimal ATFCM decision making. This paper
proposes a machine learning approach for the prediction of airlines' route
choices between two airports as a function of route characteristics, such as
flight efficiency, air navigation charges and expected level of congestion.
Different predictive models based on multinomial logistic regression and
decision trees are formulated and calibrated with historical traffic data, and
a critical evaluation of each model is conducted. We analyse the predictive
power of each model in terms of its ability to forecast traffic volumes at the
level of charging zones, proving significant potential to enhance pre-tactical
traffic forecast. We conclude by discussing the limitations and room for
improvement of the proposed approach, as well as the future developments
required to produce reliable traffic forecasts at a higher spatial and temporal
resolution.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2018 11:25:18 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Feb 2018 07:51:14 GMT"
}
] | 1,519,171,200,000 | [
[
"Marcos",
"Rodrigo",
""
],
[
"García-Cantú",
"Oliva",
""
],
[
"Herranz",
"Ricardo",
""
]
] |
1802.06604 | Garrett Andersen | Garrett Andersen, Peter Vrancx, Haitham Bou-Ammar | Learning High-level Representations from Demonstrations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical learning (HL) is key to solving complex sequential decision
problems with long horizons and sparse rewards. It allows learning agents to
break up large problems into smaller, more manageable subtasks. A common
approach to HL is to provide the agent with a number of high-level skills that
solve small parts of the overall problem. A major open question, however, is
how to identify a suitable set of reusable skills. We propose a principled
approach that uses human demonstrations to infer a set of subgoals based on
changes in the demonstration dynamics. Using these subgoals, we decompose the
learning problem into an abstract high-level representation and a set of
low-level subtasks. The abstract description captures the overall problem
structure, while subtasks capture desired skills. We demonstrate that we can
jointly optimize over both levels of learning. We show that the resulting
method significantly outperforms previous baselines on two challenging
problems: the Atari 2600 game Montezuma's Revenge, and a simulated robotics
problem moving the ant robot through a maze.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2018 12:11:16 GMT"
},
{
"version": "v2",
"created": "Tue, 20 Feb 2018 10:09:48 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Feb 2018 17:06:59 GMT"
}
] | 1,519,862,400,000 | [
[
"Andersen",
"Garrett",
""
],
[
"Vrancx",
"Peter",
""
],
[
"Bou-Ammar",
"Haitham",
""
]
] |
1802.06698 | Patrick Bl\"obaum | Patrick Bl\"obaum, Dominik Janzing, Takashi Washio, Shohei Shimizu,
Bernhard Sch\"olkopf | Analysis of cause-effect inference by comparing regression errors | This is an extended version of the AISTATS 2018 paper | PeerJ, 2019 | 10.7717/peerj-cs.169 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the problem of inferring the causal direction between two
variables by comparing the least-squares errors of the predictions in both
possible directions. Under the assumption of an independence between the
function relating cause and effect, the conditional noise distribution, and the
distribution of the cause, we show that the errors are smaller in causal
direction if both variables are equally scaled and the causal relation is close
to deterministic. Based on this, we provide an easily applicable algorithm that
only requires a regression in both possible causal directions and a comparison
of the errors. The performance of the algorithm is compared with various
related causal inference methods in different artificial and real-world data
sets.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2018 16:50:05 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Jan 2019 18:21:20 GMT"
}
] | 1,548,374,400,000 | [
[
"Blöbaum",
"Patrick",
""
],
[
"Janzing",
"Dominik",
""
],
[
"Washio",
"Takashi",
""
],
[
"Shimizu",
"Shohei",
""
],
[
"Schölkopf",
"Bernhard",
""
]
] |
1802.06767 | Kyrylo Malakhov | A. V. Palagin, N.G. Petrenko, V.Yu. Velychko, K.S. Malakhov | The problem of the development ontology-driven architecture of
intellectual software systems | in Russian; "Bibliography" section updated for correct identification
of references by the Google Scholar parser software; 6 pages; 6 figures | Visnik of the Volodymyr Dahl East ukrainian national university 13
(2011) 179-184 Luhansk | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper describes the architecture of the intelligence system for automated
design of ontological knowledge bases of domain areas and the software model of
the management GUI (Graphical User Interface) subsystem.
| [
{
"version": "v1",
"created": "Sat, 17 Feb 2018 10:24:01 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Feb 2018 12:57:27 GMT"
}
] | 1,519,344,000,000 | [
[
"Palagin",
"A. V.",
""
],
[
"Petrenko",
"N. G.",
""
],
[
"Velychko",
"V. Yu.",
""
],
[
"Malakhov",
"K. S.",
""
]
] |
1802.06768 | Kyrylo Malakhov | O. V. Palagin, K. S. Malakhov, V. Yu. Velichko, O. S. Shurov | Design and software implementation of subsystems for creating and using
the ontological base of a research scientist | in Ukrainian; updated "Bibliography" section for correct
identification of references by the Google Scholar parser software; 11 pages;
1 figure | Problems in programming 2 (2017) 72-81 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creation of the information systems and tools for scientific research and
development support has always been one of the central directions of the
development of computer science. The main features of the modern evolution of
scientific research and development are the transdisciplinary approach and the
deep intellectualisation of all stages of the life cycle of formulation and
solution of scientific problems. The theoretical and practical aspects of the
development of perspective complex knowledge-oriented information systems and
their components are considered in the paper. The analysis of existing
scientific information systems (or current research information systems, CRIS)
and synthesis of general principles of design of the research and development
workstation environment of a researcher and its components are carried out in
the work. The functional components of knowledge-oriented information system
research and development workstation environment of a researcher are designed.
The functional components of the knowledge-oriented information system research
and development workstation environment were designed and developed, including
functional models and a software implementation of the software subsystem for
the creation and use of an ontological knowledge base of a research fellow's
publications, as part of the personalized knowledge base of a scientific researcher.
Research under the modern e-Science paradigm requires pooling the scientific
community and an intensive exchange of research results, which may be achieved
through the use of scientific information systems. The research and development
workstation environment makes it possible to solve problems of constructivisation
and formalisation of the representation of knowledge obtained during the research
process and the collective interaction of collaborators.
| [
{
"version": "v1",
"created": "Sat, 17 Feb 2018 10:46:22 GMT"
},
{
"version": "v2",
"created": "Wed, 21 Feb 2018 11:58:09 GMT"
}
] | 1,519,257,600,000 | [
[
"Palagin",
"O. V.",
""
],
[
"Malakhov",
"K. S.",
""
],
[
"Velichko",
"V. Yu.",
""
],
[
"Shurov",
"O. S.",
""
]
] |
1802.06769 | Kyrylo Malakhov | A. V. Palagin, N. G. Petrenko, K. S. Malakhov | Technique for designing a domain ontology | in Russian | Computer means, networks and systems 10 (2011) 5-12 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The article describes the technique for designing a domain ontology, shows
the flowchart of the design algorithm, and considers an example of constructing
a fragment of the ontology of the subject area of Computer Science.
| [
{
"version": "v1",
"created": "Sat, 17 Feb 2018 10:58:59 GMT"
}
] | 1,519,171,200,000 | [
[
"Palagin",
"A. V.",
""
],
[
"Petrenko",
"N. G.",
""
],
[
"Malakhov",
"K. S.",
""
]
] |
1802.06821 | Kyrylo Malakhov | V. Yu. Velychko, K. S. Malakhov, V. V. Semenkov, A. E. Strizhak | Integrated Tools for Engineering Ontologies | in Russian | Information Models and Analyses 3 (4) (2014) 336-361 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The article presents an overview of current specialized ontology engineering
tools, as well as text annotation tools based on ontologies. The main
functions and features of these tools, their advantages and disadvantages are
discussed. A systematic comparative analysis of means for engineering
ontologies is presented.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2018 19:35:14 GMT"
}
] | 1,519,171,200,000 | [
[
"Velychko",
"V. Yu.",
""
],
[
"Malakhov",
"K. S.",
""
],
[
"Semenkov",
"V. V.",
""
],
[
"Strizhak",
"A. E.",
""
]
] |
1802.06829 | Kyrylo Malakhov | A. V. Palagin, N. G. Petrenko, V. Yu. Velychko, K. S. Malakhov, O. V.
Karun | Principles of design and software development models of
ontological-driven computer systems | in Russian | Problems of Informatization and Management Vol 2 No 34 (2011)
96-101 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes the design principles of methodology of
knowledge-oriented information systems based on the ontological approach. Such
systems implement a technology for the subject-oriented extraction of knowledge
from a set of natural language texts, its formal and logical representation, and
application processing.
| [
{
"version": "v1",
"created": "Tue, 13 Feb 2018 10:20:44 GMT"
}
] | 1,519,171,200,000 | [
[
"Palagin",
"A. V.",
""
],
[
"Petrenko",
"N. G.",
""
],
[
"Velychko",
"V. Yu.",
""
],
[
"Malakhov",
"K. S.",
""
],
[
"Karun",
"O. V.",
""
]
] |
1802.06866 | Ismail Kayali | Ismail Kayali | Expert System for Diagnosis of Chest Diseases Using Neural Networks | 8 Pages | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | This article represents one of the contemporary trends in the application of
the latest methods of information and communication technology to medicine:
an expert system that helps the doctor diagnose some chest diseases. This is
important because of the frequent spread of chest diseases nowadays and the
overlapping symptoms of these diseases, which make a correct diagnosis difficult
for doctors. The system uses several algorithms: Forward Chaining, Backward
Chaining, and Neural Network (Back Propagation). However, this system cannot
replace the doctor, but it can help the doctor to avoid wrong diagnoses and
treatments. It can also be developed in such a way as to help novice doctors.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2018 21:41:32 GMT"
}
] | 1,519,171,200,000 | [
[
"Kayali",
"Ismail",
""
]
] |
1802.06881 | Michael Green | Christoffer Holmg{\aa}rd, Michael Cerny Green, Antonios Liapis, and
Julian Togelius | Automated Playtesting with Procedural Personas through MCTS with Evolved
Heuristics | 10 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a method for generative player modeling and its
application to the automatic testing of game content using archetypal player
models called procedural personas. Theoretically grounded in psychological
decision theory, procedural personas are implemented using a variation of Monte
Carlo Tree Search (MCTS) where the node selection criteria are developed using
evolutionary computation, replacing the standard UCB1 criterion of MCTS. Using
these personas we demonstrate how generative player models can be applied to a
varied corpus of game levels and demonstrate how different play styles can be
enacted in each level. In short, we use artificially intelligent personas to
construct synthetic playtesters. The proposed approach could be used as a tool
for automatic play testing when human feedback is not readily available or when
quick visualization of potential interactions is necessary. Possible
applications include interactive tools during game development or procedural
content generation systems where many evaluations must be conducted within a
short time span.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2018 22:13:20 GMT"
}
] | 1,519,171,200,000 | [
[
"Holmgård",
"Christoffer",
""
],
[
"Green",
"Michael Cerny",
""
],
[
"Liapis",
"Antonios",
""
],
[
"Togelius",
"Julian",
""
]
] |
1802.06888 | Ignacio Viglizzo | Fernando Tohm\'e and Ignacio Viglizzo | Superrational types | null | null | 10.1093/jigpal/jzz007 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a formal analysis of Douglas Hofstadter's concept of
\emph{superrationality}. We start by defining superrationally justifiable
actions, and study them in symmetric games. We then model the beliefs of the
players, in a way that leads them to different choices than the usual
assumption of rationality by restricting the range of conceivable choices.
These beliefs are captured in the formal notion of \emph{type} drawn from
epistemic game theory. The theory of coalgebras is used to frame type spaces
and to account for the existence of some of them. We find conditions that
guarantee superrational outcomes.
| [
{
"version": "v1",
"created": "Fri, 5 Jan 2018 15:44:38 GMT"
}
] | 1,616,112,000,000 | [
[
"Tohmé",
"Fernando",
""
],
[
"Viglizzo",
"Ignacio",
""
]
] |
1802.06895 | Sarath Sreedharan | Sarath Sreedharan, Siddharth Srivastava, Subbarao Kambhampati | Hierarchical Expertise-Level Modeling for User Specific Robot-Behavior
Explanations | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is a growing interest within the AI research community to develop
autonomous systems capable of explaining their behavior to users. One aspect of
the explanation generation problem that has yet to receive much attention is
the task of explaining plans to users whose level of expertise differs from that
of the explainer. We propose an approach for addressing this problem by
representing the user's model as an abstraction of the domain model that the
planner uses. We present algorithms for generating minimal explanations in
cases where this abstract human model is not known. We reduce the problem of
generating explanation to a search over the space of abstract models and
investigate possible greedy approximations for minimal explanations. We also
empirically show that our approach can efficiently compute explanations for a
variety of problems.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2018 22:35:13 GMT"
}
] | 1,519,171,200,000 | [
[
"Sreedharan",
"Sarath",
""
],
[
"Srivastava",
"Siddharth",
""
],
[
"Kambhampati",
"Subbarao",
""
]
] |
1802.06940 | Alexander Semenov | Irina Gribanova and Alexander Semenov | Using Automatic Generation of Relaxation Constraints to Improve the
Preimage Attack on 39-step MD4 | This paper was submitted to MIPRO 2018 as a conference paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we construct preimage attack on the truncated variant of the
MD4 hash function. Specifically, we study the MD4-39 function defined by the
first 39 steps of the MD4 algorithm. We suggest a new attack on MD4-39, which
develops the ideas proposed by H. Dobbertin in 1998. Namely, the special
relaxation constraints are introduced in order to simplify the equations
corresponding to the problem of finding a preimage for an arbitrary MD4-39 hash
value. The equations supplemented with the relaxation constraints are then
reduced to the Boolean Satisfiability Problem (SAT) and solved using the
state-of-the-art SAT solvers. We show that the effectiveness of a set of
relaxation constraints can be evaluated using the black-box function of a
special kind. Thus, we suggest an automatic method of relaxation constraints
generation by applying black-box optimization to this function. The
proposed method made it possible to find new relaxation constraints that
contribute to a SAT-based preimage attack on MD4-39 which significantly
outperforms the competition.
| [
{
"version": "v1",
"created": "Tue, 20 Feb 2018 02:47:41 GMT"
}
] | 1,570,406,400,000 | [
[
"Gribanova",
"Irina",
""
],
[
"Semenov",
"Alexander",
""
]
] |
1802.07489 | Sylwia Polberg | Anthony Hunter, Sylwia Polberg, Matthias Thimm | Epistemic Graphs for Representing and Reasoning with Positive and
Negative Influences of Arguments | null | null | 10.1016/j.artint.2020.103236 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces epistemic graphs as a generalization of the epistemic
approach to probabilistic argumentation. In these graphs, an argument can be
believed or disbelieved up to a given degree, thus providing a more
fine-grained alternative to the standard Dung's approaches when it comes to
determining the status of a given argument. Furthermore, the flexibility of the
epistemic approach allows us to both model the rationale behind the existing
semantics as well as completely deviate from them when required. Epistemic
graphs can model both attack and support as well as relations that are neither
support nor attack. The way other arguments influence a given argument is
expressed by the epistemic constraints that can restrict the belief we have in
an argument with a varying degree of specificity. The fact that we can specify
the rules under which arguments should be evaluated and we can include
constraints between unrelated arguments permits the framework to be more
context-sensitive. It also allows for better modelling of imperfect agents,
which can be important in multi-agent applications.
| [
{
"version": "v1",
"created": "Wed, 21 Feb 2018 10:05:49 GMT"
},
{
"version": "v2",
"created": "Tue, 14 Jan 2020 11:45:14 GMT"
}
] | 1,579,046,400,000 | [
[
"Hunter",
"Anthony",
""
],
[
"Polberg",
"Sylwia",
""
],
[
"Thimm",
"Matthias",
""
]
] |
1802.07740 | Neil Rabinowitz | Neil C. Rabinowitz, Frank Perbet, H. Francis Song, Chiyuan Zhang, S.M.
Ali Eslami, Matthew Botvinick | Machine Theory of Mind | 21 pages, 15 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Theory of mind (ToM; Premack & Woodruff, 1978) broadly refers to humans'
ability to represent the mental states of others, including their desires,
beliefs, and intentions. We propose to train a machine to build such models
too. We design a Theory of Mind neural network -- a ToMnet -- which uses
meta-learning to build models of the agents it encounters, from observations of
their behaviour alone. Through this process, it acquires a strong prior model
for agents' behaviour, as well as the ability to bootstrap to richer
predictions about agents' characteristics and mental states using only a small
number of behavioural observations. We apply the ToMnet to agents behaving in
simple gridworld environments, showing that it learns to model random,
algorithmic, and deep reinforcement learning agents from varied populations,
and that it passes classic ToM tasks such as the "Sally-Anne" test (Wimmer &
Perner, 1983; Baron-Cohen et al., 1985) of recognising that others can hold
false beliefs about the world. We argue that this system -- which autonomously
learns how to model other agents in its world -- is an important step forward
for developing multi-agent AI systems, for building intermediating technology
for machine-human interaction, and for advancing the progress on interpretable
AI.
| [
{
"version": "v1",
"created": "Wed, 21 Feb 2018 19:00:10 GMT"
},
{
"version": "v2",
"created": "Mon, 12 Mar 2018 21:37:03 GMT"
}
] | 1,520,985,600,000 | [
[
"Rabinowitz",
"Neil C.",
""
],
[
"Perbet",
"Frank",
""
],
[
"Song",
"H. Francis",
""
],
[
"Zhang",
"Chiyuan",
""
],
[
"Eslami",
"S. M. Ali",
""
],
[
"Botvinick",
"Matthew",
""
]
] |
1802.07842 | Hamid Maei | Hamid Reza Maei | Convergent Actor-Critic Algorithms Under Off-Policy Training and
Function Approximation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the first class of policy-gradient algorithms that work with both
state-value and policy function-approximation, and are guaranteed to converge
under off-policy training. Our solution targets problems in reinforcement
learning where the action representation adds to the curse of dimensionality;
that is, with continuous or large action sets, thus making it infeasible to
estimate state-action value functions (Q functions). Using state-value
functions helps to lift the curse and as a result naturally turn our
policy-gradient solution into classical Actor-Critic architecture whose Actor
uses state-value function for the update. Our algorithms, Gradient Actor-Critic
and Emphatic Actor-Critic, are derived based on the exact gradient of averaged
state-value function objective and thus are guaranteed to converge to its
optimal solution, while maintaining all the desirable properties of classical
Actor-Critic methods with no additional hyper-parameters. To our knowledge,
this is the first time that convergent off-policy learning methods have been
extended to classical Actor-Critic methods with function approximation.
| [
{
"version": "v1",
"created": "Wed, 21 Feb 2018 23:14:44 GMT"
}
] | 1,519,344,000,000 | [
[
"Maei",
"Hamid Reza",
""
]
] |
1802.08201 | Umberto Straccia | Giovanni Casini and Umberto Straccia and Thomas Meyer | A Polynomial Time Subsumption Algorithm for Nominal Safe
$\mathcal{ELO}_\bot$ under Rational Closure | null | null | 10.1016/j.ins.2018.09.037 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Description Logics (DLs) under Rational Closure (RC) is a well-known
framework for non-monotonic reasoning in DLs. In this paper, we address the
concept subsumption decision problem under RC for nominal safe
$\mathcal{ELO}_\bot$, a notable and practically important DL representative of
the OWL 2 profile OWL 2 EL.
Our contribution here is to define a polynomial time subsumption procedure
for nominal safe $\mathcal{ELO}_\bot$ under RC that relies entirely on a series
of classical, monotonic $\mathcal{EL}_\bot$ subsumption tests. Therefore, any
existing classical monotonic $\mathcal{EL}_\bot$ reasoner can be used as a
black box to implement our method. We then also adapt the method to one of the
known extensions of RC for DLs, namely Defeasible Inheritance-based DLs without
losing the computational tractability.
| [
{
"version": "v1",
"created": "Thu, 22 Feb 2018 17:54:00 GMT"
},
{
"version": "v2",
"created": "Fri, 28 Sep 2018 06:34:08 GMT"
}
] | 1,538,352,000,000 | [
[
"Casini",
"Giovanni",
""
],
[
"Straccia",
"Umberto",
""
],
[
"Meyer",
"Thomas",
""
]
] |
1802.08328 | Carlo Taticchi | Stefano Bistarelli, Francesco Santini, Carlo Taticchi | On Looking for Local Expansion Invariants in Argumentation Semantics: a
Preliminary Report | null | Proceedings of the Thirty-First International Florida Artificial
Intelligence Research Society Conference, {FLAIRS} 2018, Melbourne, Florida,
{USA.} May 21-23 2018. Pages 537--540 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study invariant local expansion operators for conflict-free and admissible
sets in Abstract Argumentation Frameworks (AFs). Such operators are directly
applied on AFs, and are invariant with respect to a chosen "semantics" (that is
w.r.t. each of the conflict free/admissible set of arguments). Accordingly, we
derive a definition of robustness for AFs in terms of the number of times such
operators can be applied without producing any change in the chosen semantics.
| [
{
"version": "v1",
"created": "Thu, 22 Feb 2018 22:18:53 GMT"
},
{
"version": "v2",
"created": "Mon, 30 Jul 2018 13:18:15 GMT"
}
] | 1,533,081,600,000 | [
[
"Bistarelli",
"Stefano",
""
],
[
"Santini",
"Francesco",
""
],
[
"Taticchi",
"Carlo",
""
]
] |
1802.08365 | Xun Yang | Di Wu, Xiujun Chen, Xun Yang, Hao Wang, Qing Tan, Xiaoxun Zhang, Jian
Xu, Kun Gai | Budget Constrained Bidding by Model-free Reinforcement Learning in
Display Advertising | In The 27th ACM International Conference on Information and Knowledge
Management (CIKM 18), October 22-26, 2018, Torino, Italy. ACM, New York, NY,
USA, 9 pages | null | 10.1145/3269206.3271748 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-time bidding (RTB) is an important mechanism in online display
advertising, where a proper bid for each page view plays an essential role for
good marketing results. Budget constrained bidding is a typical scenario in RTB
where the advertisers hope to maximize the total value of the winning
impressions under a pre-set budget constraint. However, the optimal bidding
strategy is hard to be derived due to the complexity and volatility of the
auction environment. To address these challenges, in this paper, we formulate
budget constrained bidding as a Markov Decision Process and propose a
model-free reinforcement learning framework to resolve the optimization
problem. Our analysis shows that the immediate reward from environment is
misleading under a critical resource constraint. Therefore, we innovate a
reward function design methodology for the reinforcement learning problems with
constraints. Based on the new reward design, we employ a deep neural network to
learn the appropriate reward so that the optimal policy can be learned
effectively. Different from the prior model-based work, which suffers from the
scalability problem, our framework is easy to be deployed in large-scale
industrial applications. The experimental evaluations demonstrate the
effectiveness of our framework on large-scale real datasets.
| [
{
"version": "v1",
"created": "Fri, 23 Feb 2018 02:29:06 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Feb 2018 05:10:15 GMT"
},
{
"version": "v3",
"created": "Tue, 7 Aug 2018 05:15:08 GMT"
},
{
"version": "v4",
"created": "Wed, 8 Aug 2018 07:44:56 GMT"
},
{
"version": "v5",
"created": "Fri, 7 Sep 2018 03:05:00 GMT"
},
{
"version": "v6",
"created": "Tue, 23 Oct 2018 15:20:56 GMT"
}
] | 1,540,339,200,000 | [
[
"Wu",
"Di",
""
],
[
"Chen",
"Xiujun",
""
],
[
"Yang",
"Xun",
""
],
[
"Wang",
"Hao",
""
],
[
"Tan",
"Qing",
""
],
[
"Zhang",
"Xiaoxun",
""
],
[
"Xu",
"Jian",
""
],
[
"Gai",
"Kun",
""
]
] |
1802.08445 | Carlo Taticchi | Stefano Bistarelli, Alessandra Tappini, Carlo Taticchi | A Matrix Approach for Weighted Argumentation Frameworks: a Preliminary
Report | null | A Matrix Approach for Weighted Argumentation Frameworks. FLAIRS
Conference 2018: 507-512 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The assignment of weights to attacks in a classical Argumentation Framework
allows semantics to be computed by taking into account the different importance of
each argument. We represent a Weighted Argumentation Framework by a non-binary
matrix, and we characterize the basic extensions (such as w-admissible,
w-stable, w-complete) by analysing sub-blocks of this matrix. Also, we show how
to reduce the matrix into another one of smaller size, that is equivalent to
the original one for the determination of extensions. Furthermore, we provide
two algorithms that incrementally build w-grounded and w-preferred
extensions starting from a w-admissible extension.
| [
{
"version": "v1",
"created": "Fri, 23 Feb 2018 09:00:09 GMT"
}
] | 1,538,611,200,000 | [
[
"Bistarelli",
"Stefano",
""
],
[
"Tappini",
"Alessandra",
""
],
[
"Taticchi",
"Carlo",
""
]
] |
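A minimal illustration of the sub-block idea mentioned in the 1802.08445 abstract above, not taken from the paper itself: the sketch below checks plain conflict-freeness by testing whether the sub-block of the attack matrix restricted to a candidate set contains any attack. The paper's w-variants, which reason about the attack weights themselves, are not modelled, and the function name and example matrix are invented for illustration.

```python
import numpy as np

def is_conflict_free(weights, subset):
    """Sketch: a set of arguments is conflict-free when the sub-block of the
    weighted attack matrix restricted to that set contains no attack.
    weights[i, j] > 0 means argument i attacks argument j with that weight."""
    idx = np.array(sorted(subset))
    return not np.any(weights[np.ix_(idx, idx)] > 0)

# Three arguments: 0 attacks 1 (weight 2.0), 1 attacks 2 (weight 0.5).
W = np.zeros((3, 3))
W[0, 1], W[1, 2] = 2.0, 0.5
print(is_conflict_free(W, {0, 2}))   # True: no attacks inside {0, 2}
print(is_conflict_free(W, {0, 1}))   # False: 0 attacks 1
```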
1802.08540 | Suttinee Sawadsitang | Suttinee Sawadsitang, Rakpong Kaewpuang, Siwei Jiang, Dusit Niyato,
Ping Wang | Optimal Stochastic Delivery Planning in Full-Truckload and
Less-Than-Truckload Delivery | 5 pages, 6 figures, Vehicular Technology Conference (VTC Spring),
2017 IEEE 85th | null | 10.1109/VTCSpring.2017.8108576 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With an increasing demand from emerging logistics businesses, Vehicle Routing
Problem with Private fleet and common Carrier (VRPPC) has been introduced to
manage package delivery services from a supplier to customers. However, almost
all existing studies focus on the deterministic problem that assumes all
parameters are known perfectly at the time when the planning and routing
decisions are made. In reality, some parameters are random and unknown.
Therefore, in this paper, we consider VRPPC with hard time windows and random
demand, called Optimal Delivery Planning (ODP). The proposed ODP aims to
minimize the total package delivery cost while meeting the customer time window
constraints. We use stochastic integer programming to formulate the
optimization problem incorporating the customer demand uncertainty. Moreover,
we evaluate the performance of the ODP using test data from a benchmark dataset
and from an actual Singapore road map.
| [
{
"version": "v1",
"created": "Sun, 4 Feb 2018 08:45:19 GMT"
}
] | 1,519,603,200,000 | [
[
"Sawadsitang",
"Suttinee",
""
],
[
"Kaewpuang",
"Rakpong",
""
],
[
"Jiang",
"Siwei",
""
],
[
"Niyato",
"Dusit",
""
],
[
"Wang",
"Ping",
""
]
] |
1802.08554 | Douglas Summers Stay | Douglas Summers Stay | Semantic Vector Spaces for Broadening Consideration of Consequences | A book chapter from Autonomy and Artificial Intelligence: A Threat or
Savior? | Autonomy and Artificial Intelligence: A Threat or Savior? Editors
W.F. Lawless, Ranjeev Mittu, Donald Sofge, Stephen Russell Springer, 2017 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reasoning systems with too simple a model of the world and human intent are
unable to consider potential negative side effects of their actions and modify
their plans to avoid them (e.g., avoiding potential errors). However,
hand-encoding the enormous and subtle body of facts that constitutes common
sense into a knowledge base has proved too difficult despite decades of work.
Distributed semantic vector spaces learned from large text corpora, on the
other hand, can learn representations that capture shades of meaning of
common-sense concepts and perform analogical and associational reasoning in
ways that knowledge bases are too rigid to perform, by encoding concepts and
the relations between them as geometric structures. These have, however, the
disadvantage of being unreliable, poorly understood, and biased in their view
of the world by the source material. This chapter will discuss how these
approaches may be combined so as to retain the best properties of each
for understanding the world and human intentions in a richer way.
| [
{
"version": "v1",
"created": "Fri, 23 Feb 2018 14:41:33 GMT"
}
] | 1,519,603,200,000 | [
[
"Stay",
"Douglas Summers",
""
]
] |
1802.08802 | Evan Liu | Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, Percy
Liang | Reinforcement Learning on Web Interfaces Using Workflow-Guided
Exploration | International Conference on Learning Representations (ICLR), 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) agents improve through trial-and-error, but when
reward is sparse and the agent cannot discover successful action sequences,
learning stagnates. This has been a notable problem in training deep RL agents
to perform web-based tasks, such as booking flights or replying to emails,
where a single mistake can ruin the entire sequence of actions. A common remedy
is to "warm-start" the agent by pre-training it to mimic expert demonstrations,
but this is prone to overfitting. Instead, we propose to constrain exploration
using demonstrations. From each demonstration, we induce high-level "workflows"
which constrain the allowable actions at each time step to be similar to those
in the demonstration (e.g., "Step 1: click on a textbox; Step 2: enter some
text"). Our exploration policy then learns to identify successful workflows and
samples actions that satisfy these workflows. Workflows prune out bad
exploration directions and accelerate the agent's ability to discover rewards.
We use our approach to train a novel neural policy designed to handle the
semi-structured nature of websites, and evaluate on a suite of web tasks,
including the recent World of Bits benchmark. We achieve new state-of-the-art
results, and show that workflow-guided exploration improves sample efficiency
over behavioral cloning by more than 100x.
| [
{
"version": "v1",
"created": "Sat, 24 Feb 2018 05:32:47 GMT"
}
] | 1,519,689,600,000 | [
[
"Liu",
"Evan Zheran",
""
],
[
"Guu",
"Kelvin",
""
],
[
"Pasupat",
"Panupong",
""
],
[
"Shi",
"Tianlin",
""
],
[
"Liang",
"Percy",
""
]
] |
1802.08822 | Chang-Shing Lee | Chang-Shing Lee, Mei-Hui Wang, Chi-Shiang Wang, Olivier Teytaud,
Jialin Liu, Su-Wei Lin, and Pi-Hsia Hung | PSO-based Fuzzy Markup Language for Student Learning Performance
Evaluation and Educational Application | This paper is accepted in Feb. 2018 which will be published in IEEE
Transactions on Fuzzy Systems | null | 10.1109/TFUZZ.2018.2810814 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes an agent with particle swarm optimization (PSO) based on
a Fuzzy Markup Language (FML) for student learning performance evaluation and
educational applications; the proposed agent is built on the response
data from a conventional test and on item response theory (IRT). First, we apply a
GS-based parameter estimation mechanism to estimate the item parameters
according to the response data, and then to compare its results with those of
an IRT-based Bayesian parameter estimation mechanism. In addition, we propose a
static-IRT test assembly mechanism to assemble a form for the conventional
test. The presented FML-based dynamic assessment mechanism infers the
probability of making a correct response to the item for a student with various
abilities. Moreover, this paper also proposes a novel PFML learning mechanism
for optimizing the parameters between items and students. Finally, we adopt a
K-fold cross validation mechanism to evaluate the performance of the proposed
agent. Experimental results show that the novel PFML learning mechanism for the
parameter estimation and learning optimization performs favorably. We believe
the proposed PFML will be a reference for education research and pedagogy and
an important co-learning mechanism for future human-machine educational
applications.
| [
{
"version": "v1",
"created": "Sat, 24 Feb 2018 09:26:46 GMT"
}
] | 1,555,286,400,000 | [
[
"Lee",
"Chang-Shing",
""
],
[
"Wang",
"Mei-Hui",
""
],
[
"Wang",
"Chi-Shiang",
""
],
[
"Teytaud",
"Olivier",
""
],
[
"Liu",
"Jialin",
""
],
[
"Lin",
"Su-Wei",
""
],
[
"Hung",
"Pi-Hsia",
""
]
] |
1802.08864 | Juergen Schmidhuber | Juergen Schmidhuber | One Big Net For Everything | 17 pages, 107 references | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I apply recent work on "learning to think" (2015) and on PowerPlay (2011) to
the incremental training of an increasingly general problem solver, continually
learning to solve new tasks without forgetting previous skills. The problem
solver is a single recurrent neural network (or similar general purpose
computer) called ONE. ONE is unusual in the sense that it is trained in various
ways, e.g., by black box optimization / reinforcement learning / artificial
evolution as well as supervised / unsupervised learning. For example, ONE may
learn through neuroevolution to control a robot through environment-changing
actions, and learn through unsupervised gradient descent to predict future
inputs and vector-valued reward signals as suggested in 1990. User-given tasks
can be defined through extra goal-defining input patterns, also proposed in
1990. Suppose ONE has already learned many skills. Now a copy of ONE can be
re-trained to learn a new skill, e.g., through neuroevolution without a
teacher. Here it may profit from re-using previously learned subroutines, but
it may also forget previous skills. Then ONE is retrained in PowerPlay style
(2011) on stored input/output traces of (a) ONE's copy executing the new skill
and (b) previous instances of ONE whose skills are still considered worth
memorizing. Simultaneously, ONE is retrained on old traces (even those of
unsuccessful trials) to become a better predictor, without additional expensive
interaction with the environment. More and more control and prediction skills
are thus collapsed into ONE, like in the chunker-automatizer system of the
neural history compressor (1991). This forces ONE to relate partially analogous
skills (with shared algorithmic information) to each other, creating common
subroutines in form of shared subnetworks of ONE, to greatly speed up
subsequent learning of additional, novel but algorithmically related skills.
| [
{
"version": "v1",
"created": "Sat, 24 Feb 2018 15:23:46 GMT"
}
] | 1,519,689,600,000 | [
[
"Schmidhuber",
"Juergen",
""
]
] |
1802.09119 | Ruggiero Lovreglio | Ruggiero Lovreglio, Vicente Gonzalez, Zhenan Feng, Robert Amor,
Michael Spearpoint, Jared Thomas, Margaret Trotter, Rafael Sacks | Prototyping Virtual Reality Serious Games for Building Earthquake
Preparedness: The Auckland City Hospital Case Study | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enhancing evacuee safety is a key factor in reducing the number of injuries
and deaths that result from earthquakes. One way this can be achieved is by
training occupants. Virtual Reality (VR) and Serious Games (SGs) represent
novel techniques that may overcome the limitations of traditional training
approaches. VR and SGs have been examined in the fire emergency context,
however, their application to earthquake preparedness has not yet been
extensively examined. We provide a theoretical discussion of the advantages and
limitations of using VR SGs to investigate how building occupants behave during
earthquake evacuations and to train building occupants to cope with such
emergencies. We explore key design components for developing a VR SG framework:
(a) what features constitute an earthquake event, (b) which building types can
be selected and represented within the VR environment, (c) how damage to the
building can be determined and represented, (d) how non-player characters (NPC)
can be designed, and (e) what level of interaction there can be between NPC and
the human participants. We illustrate the above by presenting the Auckland City
Hospital, New Zealand as a case study, and propose a possible VR SG training
tool to enhance earthquake preparedness in public buildings.
| [
{
"version": "v1",
"created": "Mon, 26 Feb 2018 01:08:51 GMT"
}
] | 1,519,689,600,000 | [
[
"Lovreglio",
"Ruggiero",
""
],
[
"Gonzalez",
"Vicente",
""
],
[
"Feng",
"Zhenan",
""
],
[
"Amor",
"Robert",
""
],
[
"Spearpoint",
"Michael",
""
],
[
"Thomas",
"Jared",
""
],
[
"Trotter",
"Margaret",
""
],
[
"Sacks",
"Rafael",
""
]
] |
1802.09159 | Ramamurthy Badrinath | Anusha Mujumdar, Swarup Kumar Mohalik, Ramamurthy Badrinath | Antifragility for Intelligent Autonomous Systems | Under Review. Consists of seven pages and four figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antifragile systems grow measurably better in the presence of hazards. This
is in contrast to fragile systems which break down in the presence of hazards,
robust systems that tolerate hazards up to a certain degree, and resilient
systems that -- like self-healing systems -- revert to their earlier expected
behavior after a period of convalescence. The notion of antifragility was
introduced by Taleb for economics systems, but its applicability has been
illustrated in biological and engineering domains as well. In this paper, we
propose an architecture that imparts antifragility to intelligent autonomous
systems, specifically those that are goal-driven and based on AI-planning. We
argue that this architecture allows the system to self-improve by uncovering
new capabilities obtained either through the hazards themselves (opportunistic)
or through deliberation (strategic). An AI planning-based case study of an
autonomous wheeled robot is presented. We show that with the proposed
architecture, the robot develops antifragile behaviour with respect to an oil
spill hazard.
| [
{
"version": "v1",
"created": "Mon, 26 Feb 2018 04:58:55 GMT"
}
] | 1,519,689,600,000 | [
[
"Mujumdar",
"Anusha",
""
],
[
"Mohalik",
"Swarup Kumar",
""
],
[
"Badrinath",
"Ramamurthy",
""
]
] |
1802.09669 | George Leu | George Leu and Hussein Abbass | A Multi-Disciplinary Review of Knowledge Acquisition Methods: From Human
to Autonomous Eliciting Agents | null | Knowledge-Based Systems, Volume 105, Elsevier, 2016 | 10.1016/j.knosys.2016.02.012 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper offers a multi-disciplinary review of knowledge acquisition
methods in human activity systems. The review captures the degree of
involvement of various types of agencies in the knowledge acquisition process,
and proposes a classification with three categories of methods: the human
agent, the human-inspired agent, and the autonomous machine agent methods. In
the first two categories, the acquisition of knowledge is seen as a cognitive
task analysis exercise, while in the third category knowledge acquisition is
treated as an autonomous knowledge-discovery endeavour. The motivation for this
classification stems from the continuous change over time of the structure,
meaning and purpose of human activity systems, which are seen as the factor
that fuelled researchers' and practitioners' efforts in knowledge acquisition
for more than a century.
We show through this review that the KA field is increasingly active due to
the ever-increasing pace of change in human activity, and conclude by
discussing the emergence of a fourth category of knowledge acquisition methods,
which are based on red-teaming and co-evolution.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2018 01:21:46 GMT"
}
] | 1,519,776,000,000 | [
[
"Leu",
"George",
""
],
[
"Abbass",
"Hussein",
""
]
] |
1802.09810 | Nils Jansen | Steven Carr, Nils Jansen, Ralf Wimmer, Jie Fu, Ufuk Topcu | Human-in-the-Loop Synthesis for Partially Observable Markov Decision
Processes | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study planning problems where autonomous agents operate inside
environments that are subject to uncertainties and not fully observable.
Partially observable Markov decision processes (POMDPs) are a natural formal
model to capture such problems. Because of the potentially huge or even
infinite belief space in POMDPs, synthesis with safety guarantees is, in
general, computationally intractable. We propose an approach that aims to
circumvent this difficulty: in scenarios that can be partially or fully
simulated in a virtual environment, we actively integrate a human user to
control an agent. While the user repeatedly tries to safely guide the agent in
the simulation, we collect data from the human input. Via behavior cloning, we
translate the data into a strategy for the POMDP. The strategy resolves all
nondeterminism and non-observability of the POMDP, resulting in a discrete-time
Markov chain (MC). The efficient verification of this MC gives quantitative
insights into the quality of the inferred human strategy by proving or
disproving given system specifications. For the case that the quality of the
strategy is not sufficient, we propose a refinement method using
counterexamples presented to the human. Experiments show that by including
humans into the POMDP verification loop we improve the state of the art by
orders of magnitude in terms of scalability.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2018 10:29:56 GMT"
}
] | 1,519,776,000,000 | [
[
"Carr",
"Steven",
""
],
[
"Jansen",
"Nils",
""
],
[
"Wimmer",
"Ralf",
""
],
[
"Fu",
"Jie",
""
],
[
"Topcu",
"Ufuk",
""
]
] |
1802.09924 | J. G. Wolff | J Gerard Wolff | Introduction to the SP theory of intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article provides a brief introduction to the "Theory of Intelligence"
and its realisation in the "SP Computer Model". The overall goal of the SP
programme of research, in accordance with long-established principles in
science, has been the simplification and integration of observations and
concepts across artificial intelligence, mainstream computing, mathematics, and
human learning, perception, and cognition. In broad terms, the SP system is a
brain-like system that takes in "New" information through its senses and stores
some or all of it as "Old" information. A central idea in the system is the
powerful concept of "SP-multiple-alignment", borrowed and adapted from
bioinformatics. This is the key to the system's versatility in aspects of
intelligence, in the representation of diverse kinds of knowledge, and in the
seamless integration of diverse aspects of intelligence and diverse kinds of
knowledge, in any combination. There are many potential benefits and
applications of the SP system. It is envisaged that the system will be
developed as the "SP Machine", which will initially be a software virtual
machine, hosted on a high-performance computer, a vehicle for further research
and a step towards the development of an industrial-strength SP Machine.
| [
{
"version": "v1",
"created": "Sat, 24 Feb 2018 17:25:43 GMT"
}
] | 1,519,776,000,000 | [
[
"Wolff",
"J Gerard",
""
]
] |
1802.10054 | Anthony Hunter | Lisa Chalaguine and Emmanuel Hadoux and Fiona Hamilton and Andrew
Hayward and Anthony Hunter and Sylwia Polberg and Henry W. W. Potts | Domain Modelling in Computational Persuasion for Behaviour Change in
Healthcare | 32 pages, 9 figures, draft journal paper | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of behaviour change is to help people to change aspects of their
behaviour for the better (e.g., to decrease calorie intake, to drink in
moderation, to take more exercise, to complete a course of antibiotics once
started, etc.). In current persuasion technology for behaviour change, the
emphasis is on helping people to explore their issues (e.g., through
questionnaires or game playing) or to remember to follow a behaviour change
plan (e.g., diaries and email reminders). However, recent developments in
computational persuasion are leading to an argument-centric approach to
persuasion that can potentially be harnessed in behaviour change applications.
In this paper, we review developments in computational persuasion, and then
focus on domain modelling as a key component. We present a multi-dimensional
approach to domain modelling. At the core of this proposal is an ontology which
provides a representation of key factors, in particular kinds of belief, which
we have identified in the behaviour change literature as being important in
diverse behaviour change initiatives. Our proposal for domain modelling is
intended to facilitate the acquisition and representation of the arguments that
can be used in persuasion dialogues, together with meta-level information about
them which can be used by the persuader to make strategic choices of argument
to present.
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2018 18:13:57 GMT"
}
] | 1,519,776,000,000 | [
[
"Chalaguine",
"Lisa",
""
],
[
"Hadoux",
"Emmanuel",
""
],
[
"Hamilton",
"Fiona",
""
],
[
"Hayward",
"Andrew",
""
],
[
"Hunter",
"Anthony",
""
],
[
"Polberg",
"Sylwia",
""
],
[
"Potts",
"Henry W. W.",
""
]
] |
1802.10269 | Akansel Cosgun | David Isele, Akansel Cosgun | Selective Experience Replay for Lifelong Learning | Presented in 32nd Conference on Artificial Intelligence (AAAI 2018) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep reinforcement learning has emerged as a powerful tool for a variety of
learning tasks, however deep nets typically exhibit forgetting when learning
multiple tasks in sequence. To mitigate forgetting, we propose an experience
replay process that augments the standard FIFO buffer and selectively stores
experiences in a long-term memory. We explore four strategies for selecting
which experiences will be stored: favoring surprise, favoring reward, matching
the global training distribution, and maximizing coverage of the state space.
We show that distribution matching successfully prevents catastrophic
forgetting, and is consistently the best approach on all domains tested. While
distribution matching has better and more consistent performance, we identify
one case in which coverage maximization is beneficial - when tasks that receive
less training are more important. Overall, our results show that selective
experience replay, when suitable selection algorithms are employed, can prevent
catastrophic forgetting.
| [
{
"version": "v1",
"created": "Wed, 28 Feb 2018 06:02:31 GMT"
}
] | 1,519,862,400,000 | [
[
"Isele",
"David",
""
],
[
"Cosgun",
"Akansel",
""
]
] |
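As a hedged illustration of the "matching the global training distribution" strategy described in the 1802.10269 abstract above: the paper's exact selection rule is not reproduced here, but reservoir sampling is one standard way to keep a fixed-size long-term buffer whose contents approximate the distribution of everything seen so far. The class below is an invented sketch, not the authors' implementation.

```python
import random

class ReservoirReplay:
    """A long-term replay memory kept alongside a standard FIFO buffer.
    Reservoir sampling retains each experience with equal probability, so the
    buffer's contents approximate the global training distribution."""
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, experience):
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(experience)
        else:
            j = self.rng.randrange(self.seen)   # replace a slot with prob capacity/seen
            if j < self.capacity:
                self.memory[j] = experience

    def sample(self, batch_size):
        return self.rng.sample(self.memory, min(batch_size, len(self.memory)))
```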
1802.10363 | Jialin Liu Ph.D | Diego Perez-Liebana, Jialin Liu, Ahmed Khalifa, Raluca D. Gaina,
Julian Togelius, Simon M. Lucas | General Video Game AI: a Multi-Track Framework for Evaluating Agents,
Games and Content Generation Algorithms | 20 pages, 1 figure, accepted by IEEE ToG | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | General Video Game Playing (GVGP) aims at designing an agent that is capable
of playing multiple video games with no human intervention. In 2014, The
General Video Game AI (GVGAI) competition framework was created and released
with the purpose of providing researchers with a common open-source and easy-to-use
platform for testing their AI methods on a potentially infinite number of games
created using Video Game Description Language (VGDL). The framework has been
expanded into several tracks during the last few years to meet the demand of
different research directions. The agents are required either to play multiple
unknown games with or without access to game simulations, or to design new game
levels or rules. This survey paper presents the VGDL, the GVGAI framework,
existing tracks, and reviews the wide use of GVGAI framework in research,
education and competitions five years after its birth. A future plan of
framework improvements is also described.
| [
{
"version": "v1",
"created": "Wed, 28 Feb 2018 11:23:16 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Jul 2018 11:56:03 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Dec 2018 01:34:07 GMT"
},
{
"version": "v4",
"created": "Fri, 22 Feb 2019 10:05:44 GMT"
}
] | 1,551,052,800,000 | [
[
"Perez-Liebana",
"Diego",
""
],
[
"Liu",
"Jialin",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Gaina",
"Raluca D.",
""
],
[
"Togelius",
"Julian",
""
],
[
"Lucas",
"Simon M.",
""
]
] |
1803.00259 | Jun Zhao | Jun Zhao, Guang Qiu, Ziyu Guan, Wei Zhao, Xiaofei He | Deep Reinforcement Learning for Sponsored Search Real-time Bidding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bidding optimization is one of the most critical problems in online
advertising. Sponsored search (SS) auction, due to the randomness of user query
behavior and platform nature, usually adopts keyword-level bidding strategies.
In contrast, the display advertising (DA), as a relatively simpler scenario for
auction, has taken advantage of real-time bidding (RTB) to boost the
performance for advertisers. In this paper, we consider the RTB problem in
sponsored search auction, named SS-RTB. SS-RTB has a much more complex dynamic
environment, due to stochastic user query behavior and more complex bidding
policies based on multiple keywords of an ad. Most previous methods for DA
cannot be applied. We propose a reinforcement learning (RL) solution for
handling the complex dynamic environment. Although some RL methods have been
proposed for online advertising, they all fail to address the "environment
changing" problem: the state transition probabilities vary between two days.
Motivated by the observation that auction sequences of two days share similar
transition patterns at a proper aggregation level, we formulate a robust MDP
model at hour-aggregation level of the auction data and propose a
control-by-model framework for SS-RTB. Rather than generating bid prices
directly, we decide a bidding model for impressions of each hour and perform
real-time bidding accordingly. We also extend the method to handle the
multi-agent problem. We deployed the SS-RTB system in the e-commerce search
auction platform of Alibaba. Empirical experiments of offline evaluation and
online A/B test demonstrate the effectiveness of our method.
| [
{
"version": "v1",
"created": "Thu, 1 Mar 2018 09:04:37 GMT"
}
] | 1,519,948,800,000 | [
[
"Zhao",
"Jun",
""
],
[
"Qiu",
"Guang",
""
],
[
"Guan",
"Ziyu",
""
],
[
"Zhao",
"Wei",
""
],
[
"He",
"Xiaofei",
""
]
] |
1803.00512 | Adam Lerer | Amy Zhang, Adam Lerer, Sainbayar Sukhbaatar, Rob Fergus, Arthur Szlam | Composable Planning with Attributes | null | International Conference on Machine Learning, 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The tasks that an agent will need to solve often are not known during
training. However, if the agent knows which properties of the environment are
important then, after learning how its actions affect those properties, it may
be able to use this knowledge to solve complex tasks without training
specifically for them. Towards this end, we consider a setup in which an
environment is augmented with a set of user defined attributes that
parameterize the features of interest. We propose a method that learns a policy
for transitioning between "nearby" sets of attributes, and maintains a graph of
possible transitions. Given a task at test time that can be expressed in terms
of a target set of attributes, and a current state, our model infers the
attributes of the current state and searches over paths through attribute space
to get a high level plan, and then uses its low level policy to execute the
plan. We show in 3D block stacking, grid-world games, and StarCraft that our
model is able to generalize to longer, more complex tasks at test time by
composing simpler learned policies.
| [
{
"version": "v1",
"created": "Thu, 1 Mar 2018 17:21:03 GMT"
},
{
"version": "v2",
"created": "Thu, 25 Apr 2019 20:14:27 GMT"
}
] | 1,556,496,000,000 | [
[
"Zhang",
"Amy",
""
],
[
"Lerer",
"Adam",
""
],
[
"Sukhbaatar",
"Sainbayar",
""
],
[
"Fergus",
"Rob",
""
],
[
"Szlam",
"Arthur",
""
]
] |
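To make the high-level planning step in the 1803.00512 abstract above concrete, here is a small invented sketch: attributes are grouped into sets, learned transitions between "nearby" attribute sets form a graph, and a plan is a path through that graph found by breadth-first search. The low-level policies that actually execute each transition, and the attribute names in the example, are assumptions for illustration only.

```python
from collections import deque

def plan_over_attributes(transitions, start, goal):
    """Breadth-first search over attribute sets; returns a list of attribute
    sets from start to goal, or None if the goal is unreachable.
    transitions: dict mapping an attribute set (frozenset) to reachable sets."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for nxt in transitions.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Hypothetical block-stacking attributes.
a, b, c = frozenset({"red_on_table"}), frozenset({"red_on_blue"}), frozenset({"tower_of_3"})
print(plan_over_attributes({a: [b], b: [c]}, a, c))
```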
1803.00612 | Yang Yu | Yang Yu, Kazi Saidul Hasan, Mo Yu, Wei Zhang, Zhiguo Wang | Knowledge Base Relation Detection via Multi-View Matching | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relation detection is a core component for Knowledge Base Question Answering
(KBQA). In this paper, we propose a KB relation detection model via multi-view
matching which utilizes more useful information extracted from the question and the KB.
The matching inside each view uses multiple perspectives to compare the two
input texts thoroughly. All these components are designed in an end-to-end
trainable neural network model. Experiments on SimpleQuestions and WebQSP yield
state-of-the-art results.
| [
{
"version": "v1",
"created": "Thu, 1 Mar 2018 20:17:02 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Apr 2018 14:19:18 GMT"
}
] | 1,523,318,400,000 | [
[
"Yu",
"Yang",
""
],
[
"Hasan",
"Kazi Saidul",
""
],
[
"Yu",
"Mo",
""
],
[
"Zhang",
"Wei",
""
],
[
"Wang",
"Zhiguo",
""
]
] |
1803.00874 | Azlan Iqbal | Azlan Iqbal | Estimating Total Search Space Size for Specific Piece Sets in Chess | 3 Pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Automatic chess problem or puzzle composition typically involves generating
and testing various different positions, sometimes using particular piece sets.
Once a position has been generated, it is then usually tested for positional
legality based on the game rules. However, it is useful to be able to estimate
what the search space size for particular piece combinations is to begin with.
So if a desirable chess problem was successfully generated by examining
'merely' 100,000 or so positions in a theoretical search space of about 100
billion, this would imply the composing approach used was quite viable and
perhaps even impressive. In this article, I explain a method of calculating the
size of this search space using a combinatorics and permutations approach.
While the mathematics itself may already be established, a precise method and
justification of applying it with regard to the chessboard and chess pieces has
not been documented, to the best of our knowledge. Additionally, the method
could serve as a useful starting point for further estimations of search space
size which filter out positions for legality and rotation, depending on how the
automatic composer is allowed to place pieces on the board (because this
affects its total search space size).
| [
{
"version": "v1",
"created": "Tue, 27 Feb 2018 07:35:55 GMT"
}
] | 1,520,208,000,000 | [
[
"Iqbal",
"Azlan",
""
]
] |
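The 1803.00874 abstract above describes estimating search-space size with combinatorics and permutations but does not reproduce the formula; the sketch below is one plausible reading, counting placements of a piece multiset on 64 distinct squares while ignoring legality, rotation, and side to move. The function name and example piece set are illustrative assumptions.

```python
from math import comb, factorial
from collections import Counter

def placement_count(piece_set):
    """Raw search-space size for a piece multiset on an 8x8 board: choose
    squares for all pieces, then divide out permutations of identical pieces.
    Legality (kings not adjacent, pawns off the back ranks, etc.) is ignored."""
    n = len(piece_set)
    arrangements = comb(64, n) * factorial(n)   # ordered placements on distinct squares
    for count in Counter(piece_set).values():
        arrangements //= factorial(count)       # identical pieces are interchangeable
    return arrangements

# Example: white king and rook versus black king.
print(placement_count(["K", "R", "k"]))         # 64 * 63 * 62 = 249984
```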
1803.01044 | Raunak Bhattacharyya | Raunak P. Bhattacharyya, Derek J. Phillips, Blake Wulfe, Jeremy
Morton, Alex Kuefler, Mykel J. Kochenderfer | Multi-Agent Imitation Learning for Driving Simulation | 6 pages, 3 figures, 1 table | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simulation is an appealing option for validating the safety of autonomous
vehicles. Generative Adversarial Imitation Learning (GAIL) has recently been
shown to learn representative human driver models. These human driver models
were learned through training in single-agent environments, but they have
difficulty in generalizing to multi-agent driving scenarios. We argue these
difficulties arise because observations at training and test time are sampled
from different distributions. This difference makes such models unsuitable for
the simulation of driving scenes, where multiple agents must interact
realistically over long time horizons. We extend GAIL to address these
shortcomings through a parameter-sharing approach grounded in curriculum
learning. Compared with single-agent GAIL policies, policies generated by our
PS-GAIL method prove superior at interacting stably in a multi-agent setting
and capturing the emergent behavior of human drivers.
| [
{
"version": "v1",
"created": "Fri, 2 Mar 2018 21:18:16 GMT"
}
] | 1,520,294,400,000 | [
[
"Bhattacharyya",
"Raunak P.",
""
],
[
"Phillips",
"Derek J.",
""
],
[
"Wulfe",
"Blake",
""
],
[
"Morton",
"Jeremy",
""
],
[
"Kuefler",
"Alex",
""
],
[
"Kochenderfer",
"Mykel J.",
""
]
] |
1803.01092 | Timo Nolle | Timo Nolle, Stefan Luettgen, Alexander Seeliger, Max M\"uhlh\"auser | Analyzing Business Process Anomalies Using Autoencoders | 20 pages, 5 figures | null | 10.1007/s10994-018-5702-8 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Businesses are naturally interested in detecting anomalies in their internal
processes, because these can be indicators for fraud and inefficiencies. Within
the domain of business intelligence, classic anomaly detection is not very
frequently researched. In this paper, we propose a method, using autoencoders,
for detecting and analyzing anomalies occurring in the execution of a business
process. Our method does not rely on any prior knowledge about the process and
can be trained on a noisy dataset already containing the anomalies. We
demonstrate its effectiveness by evaluating it on 700 different datasets and
testing its performance against three state-of-the-art anomaly detection
methods. This paper is an extension of our previous work from 2016 [30].
Compared to the original publication we have further refined the approach in
terms of performance and conducted an elaborate evaluation on more
sophisticated datasets including real-life event logs from the Business Process
Intelligence Challenges of 2012 and 2017. In our experiments our approach
reached an F1 score of 0.87, whereas the best unaltered state-of-the-art
approach reached an F1 score of 0.72. Furthermore, our approach can be used to
analyze the detected anomalies in terms of which event within one execution of
the process causes the anomaly.
| [
{
"version": "v1",
"created": "Sat, 3 Mar 2018 02:26:28 GMT"
}
] | 1,525,132,800,000 | [
[
"Nolle",
"Timo",
""
],
[
"Luettgen",
"Stefan",
""
],
[
"Seeliger",
"Alexander",
""
],
[
"Mühlhäuser",
"Max",
""
]
] |
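A toy, hedged sketch of reconstruction-error anomaly detection in the spirit of the 1803.01092 abstract above: it is not the authors' architecture (their model is trained on noisy business-process event logs; here a small multi-layer perceptron is trained to reconstruct synthetic "normal" feature vectors), and all data, thresholds, and names are invented for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(2000, 8))       # stand-in for encoded process traces
anomalies = rng.normal(4.0, 1.0, size=(20, 8))       # injected anomalous traces

# Autoencoder-style network: learn to reproduce the input through a 3-unit bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(3,), max_iter=2000, random_state=0)
ae.fit(normal, normal)

def reconstruction_error(model, X):
    return np.mean((model.predict(X) - X) ** 2, axis=1)

# Flag anything reconstructed worse than the 99th percentile of normal traffic.
threshold = np.percentile(reconstruction_error(ae, normal), 99)
flags = reconstruction_error(ae, anomalies) > threshold
print(f"flagged {flags.sum()} of {len(anomalies)} injected anomalies")
```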
1803.01118 | Bradly Stadie | Bradly C. Stadie, Ge Yang, Rein Houthooft, Xi Chen, Yan Duan, Yuhuai
Wu, Pieter Abbeel, Ilya Sutskever | Some Considerations on Learning to Explore via Meta-Reinforcement
Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of exploration in meta reinforcement learning. Two
new meta reinforcement learning algorithms are suggested: E-MAML and
E-$\text{RL}^2$. Results are presented on a novel environment we call `Krazy
World' and a set of maze environments. We show E-MAML and E-$\text{RL}^2$
deliver better performance on tasks where exploration is important.
| [
{
"version": "v1",
"created": "Sat, 3 Mar 2018 07:13:43 GMT"
},
{
"version": "v2",
"created": "Fri, 11 Jan 2019 20:26:59 GMT"
}
] | 1,547,510,400,000 | [
[
"Stadie",
"Bradly C.",
""
],
[
"Yang",
"Ge",
""
],
[
"Houthooft",
"Rein",
""
],
[
"Chen",
"Xi",
""
],
[
"Duan",
"Yan",
""
],
[
"Wu",
"Yuhuai",
""
],
[
"Abbeel",
"Pieter",
""
],
[
"Sutskever",
"Ilya",
""
]
] |
1803.01252 | Corey Kiassat | Nima Safaei, Corey Kiassat | A Swift Heuristic Method for Work Order Scheduling under the
Skilled-Workforce Constraint | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The considered problem is how to optimally allocate a set of jobs to
technicians of different skills such that the number of technicians of each
skill does not exceed the number of persons with that skill designation. The
key motivation is to enable quick sensitivity analysis with respect to workforce size,
which is necessary in many industries in the presence of unexpected work
orders. A time-indexed mathematical model is proposed to minimize the total
weighted completion time of the jobs. The proposed model is decomposed into a
number of single-skill sub-problems so that each one is a combination of a
series of nested binary Knapsack problems. A heuristic procedure is proposed to
solve the problem. Our experimental results, based on a real-world case study,
reveal that the proposed method quickly produces a schedule statistically close
to the optimal one while the classical optimal procedure is very
time-consuming.
| [
{
"version": "v1",
"created": "Sat, 3 Mar 2018 22:19:42 GMT"
}
] | 1,520,294,400,000 | [
[
"Safaei",
"Nima",
""
],
[
"Kiassat",
"Corey",
""
]
] |
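A simple greedy sketch related to the 1803.01252 abstract above, not the paper's heuristic: within each skill, jobs are ordered by Smith's rule (processing time divided by weight) and assigned to the earliest-free technician of that skill, giving a quick estimate of the total weighted completion time. The data layout and example numbers are assumptions.

```python
from heapq import heappush, heappop

def wspt_schedule(jobs, technicians):
    """Greedy list scheduling per skill. jobs: list of (skill, processing_time,
    weight); technicians: dict mapping skill -> number of available technicians.
    Returns the total weighted completion time of the resulting schedule."""
    total_weighted_completion = 0.0
    by_skill = {}
    for skill, p, w in jobs:
        by_skill.setdefault(skill, []).append((p, w))
    for skill, job_list in by_skill.items():
        free_at = [0.0] * technicians[skill]              # min-heap of release times
        for p, w in sorted(job_list, key=lambda jw: jw[0] / jw[1]):   # Smith's rule
            start = heappop(free_at)
            finish = start + p
            total_weighted_completion += w * finish
            heappush(free_at, finish)
    return total_weighted_completion

# Example: two electricians and one welder.
print(wspt_schedule([("elec", 3, 2), ("elec", 1, 5), ("weld", 4, 1)],
                    {"elec": 2, "weld": 1}))
```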
1803.01403 | Swen Gaudl | Swen E. Gaudl, Mark J. Nelson, Simon Colton, Rob Saunders, Edward J.
Powley, Peter Ivey, Blanca Perez Ferrer, Michael Cook | Exploring Novel Game Spaces with Fluidic Games | AISB: Games AI & VR, 4 pages, 4 figures, game design, tools,
creativity | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the growing integration of smartphones into our daily lives, and their
increased ease of use, mobile games have become highly popular across all
demographics. People listen to music, play games or read the news while in
transit or during short gaps in the day. While mobile gaming is gaining popularity,
mobile expression of creativity is still in its early stages. We present here a
new type of mobile app -- fluidic games -- and illustrate our iterative
approach to their design. This new type of app seamlessly integrates
exploration of the design space into the actual user experience of playing the
game, and aims to enrich the user experience. To better illustrate the game
domain and our approach, we discuss one specific fluidic game, which is
available as a commercial product. We also briefly discuss open challenges such
as player support and how generative techniques can aid the exploration of the
game space further.
| [
{
"version": "v1",
"created": "Sun, 4 Mar 2018 18:58:07 GMT"
}
] | 1,520,294,400,000 | [
[
"Gaudl",
"Swen E.",
""
],
[
"Nelson",
"Mark J.",
""
],
[
"Colton",
"Simon",
""
],
[
"Saunders",
"Rob",
""
],
[
"Powley",
"Edward J.",
""
],
[
"Ivey",
"Peter",
""
],
[
"Ferrer",
"Blanca Perez",
""
],
[
"Cook",
"Michael",
""
]
] |
1803.01412 | Mehdi Ghatee Dr. | Shadi Abpeykar and Mehdi Ghatee | A real-time decision support system for bridge management based on the
rules generalized by CART decision tree and SMO algorithms | 11 pages, 5 figures, extracted form an MSc project in Department of
Computer Science, Amirkabir University of Technology, Tehran, Iran This paper
has been accepted for publication in AUT Journal of Mathematics and Computing
(AJMC), http://ajmc.aut.ac.ir/, 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Under dynamic conditions on bridges, we need real-time management. To this
end, this paper presents a rule-based decision support system in which the
necessary rules are extracted from simulation results made by Aimsun traffic
micro-simulation software. Then, these rules are generalized by the aid of
fuzzy rule generation algorithms. Then, they are trained by a set of supervised
and the unsupervised learning algorithms to get an ability to make decision in
real cases. As a pilot case study, Nasr Bridge in Tehran is simulated in Aimsun
and WEKA data mining software is used to execute the learning algorithms. Based
on this experiment, the accuracy of the supervised algorithms to generalize the
rules is greater than 80%. In addition, CART decision tree and sequential
minimal optimization (SMO) provides 100% accuracy for normal data and these
algorithms are so reliable for crisis management on bridge. This means that, it
is possible to use such machine learning methods to manage bridges in the
real-time conditions.
| [
{
"version": "v1",
"created": "Sun, 4 Mar 2018 20:10:01 GMT"
},
{
"version": "v2",
"created": "Sat, 30 Jun 2018 14:58:08 GMT"
}
] | 1,530,576,000,000 | [
[
"Abpeykar",
"Shadi",
""
],
[
"Ghatee",
"Mehdi",
""
]
] |
1803.01571 | Marc Aiguier | Marc Aiguier and Jamal Atif and Isabelle Bloch and Ram\'on
Pino-P\'erez | Explanatory relations in arbitrary logics based on satisfaction systems,
cutting and retraction | 30 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to introduce a new framework for defining abductive
reasoning operators based on a notion of retraction in arbitrary logics defined
as satisfaction systems. We show how this framework leads to the design of
explanatory relations satisfying properties of abductive reasoning, and discuss
its application to several logics. This extends previous work on propositional
logics where retraction was defined as a morphological erosion. Here weaker
properties are required for retraction, leading to a larger set of suitable
operators for abduction for different logics.
| [
{
"version": "v1",
"created": "Mon, 5 Mar 2018 09:32:04 GMT"
}
] | 1,520,294,400,000 | [
[
"Aiguier",
"Marc",
""
],
[
"Atif",
"Jamal",
""
],
[
"Bloch",
"Isabelle",
""
],
[
"Pino-Pérez",
"Ramón",
""
]
] |
1803.01648 | Swen Gaudl | Swen E. Gaudl | A Genetic Programming Framework for 2D Platform AI | Genetic Programming, GP, Game AI, Agent Design, Platformer, AISB,
JGAP, platformerAI, symbolic learning | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There currently exists a wide range of techniques to model and evolve
artificial players for games. Existing techniques range from black box neural
networks to entirely hand-designed solutions. In this paper, we demonstrate the
feasibility of a genetic programming framework using human controller input to
derive meaningful artificial players which can, later on, be optimised by hand.
The current state of the art in game character design relies heavily on human
designers to manually create and edit scripts and rules for game characters. To
address this manual editing bottleneck, current computational intelligence
techniques approach the issue with fully autonomous character generators,
replacing most of the design process using black box solutions such as neural
networks or the like. Our GP approach to this problem creates character
controllers which can be further authored and developed by a designer; it also
allows designers to include their play style without the need to use a
programming language. This keeps the designer in the loop while reducing
repetitive manual labour. Our system also provides insights into how players
express themselves in games and into deriving appropriate models for
representing those insights. We present our framework, supporting findings and
open challenges.
| [
{
"version": "v1",
"created": "Mon, 5 Mar 2018 13:11:22 GMT"
}
] | 1,520,294,400,000 | [
[
"Gaudl",
"Swen E.",
""
]
] |
1803.01690 | Kieran Greer Dr | Kieran Greer | New Ideas for Brain Modelling 5 | null | AIMS Biophysics, Vol. 8, Issue 1, pp. 41-56, 2021 | 10.3934/biophy.2021003 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes a process for combining patterns and features, to guide
a search process and make predictions. It is based on the functionality that a
human brain might have, which is a highly distributed network of simple
neuronal components that can apply some level of matching and cross-referencing
over retrieved patterns. The process uses memory in a dynamic way and it is
directed through the pattern matching. The paper firstly describes the
mechanisms for neuronal search, memory and prediction. The paper then presents
a formal language for defining cognitive processes, that is, pattern-based
sequences and transitions. The language can define an outer framework for
concept sets that are linked to perform the cognitive act. The language also
has a mathematical basis, allowing for the rule construction to be consistent.
Now, both static memory and dynamic process hierarchies can be built as tree
structures. The new information can also be used to further integrate the
cognitive model and the ensemble-hierarchy structure becomes an essential part.
A theory about linking can suggest that nodes in different regions link
together when they generally represent the same thing.
| [
{
"version": "v1",
"created": "Mon, 5 Mar 2018 14:46:19 GMT"
},
{
"version": "v10",
"created": "Sun, 29 Dec 2019 16:15:17 GMT"
},
{
"version": "v11",
"created": "Mon, 5 Oct 2020 08:13:25 GMT"
},
{
"version": "v2",
"created": "Mon, 23 Jul 2018 11:29:42 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Dec 2018 19:40:20 GMT"
},
{
"version": "v4",
"created": "Wed, 2 Jan 2019 21:11:32 GMT"
},
{
"version": "v5",
"created": "Sun, 30 Jun 2019 11:27:35 GMT"
},
{
"version": "v6",
"created": "Sat, 3 Aug 2019 15:43:20 GMT"
},
{
"version": "v7",
"created": "Fri, 9 Aug 2019 17:51:41 GMT"
},
{
"version": "v8",
"created": "Tue, 5 Nov 2019 15:58:38 GMT"
},
{
"version": "v9",
"created": "Wed, 6 Nov 2019 09:53:23 GMT"
}
] | 1,609,804,800,000 | [
[
"Greer",
"Kieran",
""
]
] |