id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | abstract | versions | update_date | authors_parsed
---|---|---|---|---|---|---|---|---|---|---|---|---|---
1805.03720 | Matthew Guzdial | Matthew Guzdial, Nicholas Liao, Vishwa Shah, and Mark O. Riedl | Creative Invention Benchmark | 8 pages, 4 figures, International Conference on Computational
Creativity | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present the Creative Invention Benchmark (CrIB), a
2000-problem benchmark for evaluating a particular facet of computational
creativity. Specifically, we address combinational p-creativity, the creativity
at play when someone combines existing knowledge to achieve a solution novel to
that individual. We present generation strategies for the five problem
categories of the benchmark and a set of initial baselines.
| [
{
"version": "v1",
"created": "Wed, 9 May 2018 20:20:41 GMT"
}
]
| 1,525,996,800,000 | [
[
"Guzdial",
"Matthew",
""
],
[
"Liao",
"Nicholas",
""
],
[
"Shah",
"Vishwa",
""
],
[
"Riedl",
"Mark O.",
""
]
]
|
1805.03876 | Wei Xia | Wei Xia and Roland H. C. Yap | Learning Robust Search Strategies Using a Bandit-Based Approach | Published in the Proceedings of the 32nd AAAI Conference on Artificial
Intelligence (AAAI'18) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Effective solving of constraint problems often requires choosing good or
specific search heuristics. However, choosing or designing a good search
heuristic is non-trivial and is often a manual process. In this paper, rather
than manually choosing/designing search heuristics, we propose the use of
bandit-based learning techniques to automatically select search heuristics. Our
approach is online where the solver learns and selects from a set of heuristics
during search. The goal is to obtain automatic search heuristics which give
robust performance. Preliminary experiments show that our adaptive technique is
more robust than the original search heuristics. It can also outperform the
original heuristics.
| [
{
"version": "v1",
"created": "Thu, 10 May 2018 08:30:37 GMT"
}
]
| 1,525,996,800,000 | [
[
"Xia",
"Wei",
""
],
[
"Yap",
"Roland H. C.",
""
]
]
|
1805.04253 | Lisa Andreevna Chalaguine | Lisa A. Chalaguine, Anthony Hunter, Henry W. W. Potts, Fiona L.
Hamilton | Argument Harvesting Using Chatbots | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much research in computational argumentation assumes that arguments and
counterarguments can be obtained in some way. Yet, to improve and apply models
of argument, we need methods for acquiring them. Current approaches include
argument mining from text, hand coding of arguments by researchers, or
generating arguments from knowledge bases. In this paper, we propose a new
approach, which we call argument harvesting, that uses a chatbot to enter into
a dialogue with a participant to get arguments and counterarguments from him or
her. Because it is automated, the chatbot can be used repeatedly in many
dialogues, and thereby it can generate a large corpus. We describe the
architecture of the chatbot, provide methods for managing a corpus of arguments
and counterarguments, and present an evaluation of our approach in a case study
concerning attitudes of women to participation in sport.
| [
{
"version": "v1",
"created": "Fri, 11 May 2018 06:55:32 GMT"
}
]
| 1,526,256,000,000 | [
[
"Chalaguine",
"Lisa A.",
""
],
[
"Hunter",
"Anthony",
""
],
[
"Potts",
"Henry W. W.",
""
],
[
"Hamilton",
"Fiona L.",
""
]
]
|
1805.04419 | Tuyen Le Pham | Le Pham Tuyen, Ngo Anh Vien, Abu Layek, TaeChoong Chung | Deep Hierarchical Reinforcement Learning Algorithm in Partially
Observable Markov Decision Processes | This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible | null | 10.1109/ACCESS.2018.2854283 | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | In recent years, reinforcement learning has achieved many remarkable
successes due to the growing adoption of deep learning techniques and the rapid
growth in computing power. Nevertheless, it is well-known that flat
reinforcement learning algorithms are often unable to learn well and
data-efficiently in tasks having hierarchical structures, e.g. those consisting of
multiple subtasks. Hierarchical reinforcement learning is a principled approach
that is able to tackle these challenging tasks. On the other hand, many
real-world tasks usually have only partial observability in which state
measurements are often imperfect and partially observable. The problems of RL
in such settings can be formulated as a partially observable Markov decision
process (POMDP). In this paper, we study hierarchical RL in POMDPs in which the
tasks have only partial observability and possess hierarchical properties. We
propose a hierarchical deep reinforcement learning approach for learning in
hierarchical POMDPs. The proposed deep hierarchical RL algorithm applies to
both MDP and POMDP learning. We evaluate the proposed algorithm on various
challenging hierarchical POMDPs.
| [
{
"version": "v1",
"created": "Fri, 11 May 2018 14:30:21 GMT"
}
]
| 1,537,488,000,000 | [
[
"Tuyen",
"Le Pham",
""
],
[
"Vien",
"Ngo Anh",
""
],
[
"Layek",
"Abu",
""
],
[
"Chung",
"TaeChoong",
""
]
]
|
1805.04493 | Zhaodong Wang | Zhaodong Wang, Matthew E. Taylor | Interactive Reinforcement Learning with Dynamic Reuse of Prior Knowledge
from Human/Agent's Demonstration | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning has enjoyed multiple successes in recent years.
However, these successes typically require very large amounts of data before an
agent achieves acceptable performance. This paper introduces a novel way of
combating such requirements by leveraging existing (human or agent) knowledge.
In particular, this paper uses demonstrations from agents and humans, allowing
an untrained agent to quickly achieve high performance. We empirically compare
with, and highlight the weakness of, HAT and CHAT, methods of transferring
knowledge from a source agent/human to a target agent. This paper introduces an
effective transfer approach, DRoP, combining the offline knowledge
(demonstrations recorded before learning) with online confidence-based
performance analysis. DRoP dynamically involves the demonstrator's knowledge,
integrating it into the reinforcement learning agent's online learning loop to
achieve efficient and robust learning.
| [
{
"version": "v1",
"created": "Fri, 11 May 2018 17:12:11 GMT"
}
]
| 1,526,256,000,000 | [
[
"Wang",
"Zhaodong",
""
],
[
"Taylor",
"Matthew E.",
""
]
]
|
1805.04749 | Juan Cruz Barsce | Juan Cruz Barsce, Jorge A. Palombarini, Ernesto C. Mart\'inez | A Cognitive Approach to Real-time Rescheduling using SOAR-RL | Conference paper presented in the Argentinian Congress of Computer
Science 2013 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ensuring flexible and efficient manufacturing of customized products in an
increasingly dynamic and turbulent environment without sacrificing cost
effectiveness, product quality and on-time delivery has become a key issue for
most industrial enterprises. A promising approach to cope with this challenge
is the integration of cognitive capabilities in systems and processes with the
aim of expanding the knowledge base used to perform managerial and operational
tasks. In this work, a novel approach to real-time rescheduling is proposed in
order to achieve sustainable improvements in flexibility and adaptability of
production systems through the integration of artificial cognitive
capabilities, involving perception, reasoning/learning and planning skills.
Moreover, an industrial example is discussed where the SOAR cognitive
architecture capabilities are integrated in a software prototype, showing that
the approach enables the rescheduling system to respond to events in an
autonomic way, and to acquire experience through intensive simulation while
performing repair tasks.
| [
{
"version": "v1",
"created": "Sat, 12 May 2018 16:53:53 GMT"
}
]
| 1,526,342,400,000 | [
[
"Barsce",
"Juan Cruz",
""
],
[
"Palombarini",
"Jorge A.",
""
],
[
"Martínez",
"Ernesto C.",
""
]
]
|
1805.04752 | Juan Cruz Barsce | Jorge A. Palombarini, Juan Cruz Barsce, Ernesto C. Mart\'inez | Generating Rescheduling Knowledge using Reinforcement Learning in a
Cognitive Architecture | Conference paper presented in the Jornadas Argentinas de
Inform\'atica (JAIIO) 2014. arXiv admin note: text overlap with
arXiv:1805.04749 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to reach higher degrees of flexibility, adaptability and autonomy in
manufacturing systems, it is essential to develop new rescheduling
methodologies which resort to cognitive capabilities, similar to those found in
human beings. Artificial cognition is important for designing planning and
control systems that generate and represent knowledge about heuristics for
repair-based scheduling. Rescheduling knowledge in the form of decision rules
is used to deal with unforeseen events and disturbances reactively in real
time, and take advantage of the ability to act interactively with the user to
counteract the effects of disruptions. In this work, to achieve the
aforementioned goals, a novel approach to generate rescheduling knowledge in
the form of dynamic first-order logical rules is proposed. The proposed
approach is based on the integration of reinforcement learning with artificial
cognitive capabilities involving perception and reasoning/learning skills
embedded in the Soar cognitive architecture. An industrial example is discussed
showing that the approach enables the scheduling system to assess its
operational range in an autonomic way, and to acquire experience through
intensive simulation while performing repair tasks.
| [
{
"version": "v1",
"created": "Sat, 12 May 2018 17:05:56 GMT"
}
]
| 1,527,465,600,000 | [
[
"Palombarini",
"Jorge A.",
""
],
[
"Barsce",
"Juan Cruz",
""
],
[
"Martínez",
"Ernesto C.",
""
]
]
|
1805.05250 | Juliao Braga | Juliao Braga and Joao Nuno Silva and Patricia Takako Endo and Jessica
Ribas and Nizam Omar | Blockchain to Improve Security, Knowledge and Collaboration Inter-Agent
Communication over Restrict Domains of the Internet Infrastructure | 13 pages, CSBC 2018, V Workshop pre IETF, July 2018 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper describes the deployment and implementation of a blockchain to
improve the security, knowledge, intelligence and collaboration during the
inter-agent communication processes in restricted domains of the Internet
Infrastructure. This work proposes the application of a platform-independent
blockchain to a particular model of agents; since the results on this specific
model were satisfactory, the approach can also be used in similar proposals.
| [
{
"version": "v1",
"created": "Mon, 14 May 2018 15:57:03 GMT"
},
{
"version": "v2",
"created": "Sat, 19 May 2018 09:34:32 GMT"
},
{
"version": "v3",
"created": "Wed, 8 Aug 2018 09:39:02 GMT"
},
{
"version": "v4",
"created": "Thu, 9 Aug 2018 09:50:45 GMT"
}
]
| 1,533,859,200,000 | [
[
"Braga",
"Juliao",
""
],
[
"Silva",
"Joao Nuno",
""
],
[
"Endo",
"Patricia Takako",
""
],
[
"Ribas",
"Jessica",
""
],
[
"Omar",
"Nizam",
""
]
]
|
1805.05445 | Markus Hecher | Johannes K. Fichte, Michael Morak, Markus Hecher, Stefan Woltran | Exploiting Treewidth for Projected Model Counting and its Limits | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a novel algorithm to solve projected model
counting (PMC). PMC asks to count solutions of a Boolean formula with respect
to a given set of projected variables, where multiple solutions that are
identical when restricted to the projected variables count as only one
solution. Our algorithm exploits small treewidth of the primal graph of the
input instance. It runs in time $O(2^{2^{k+4}} n^2)$, where $k$ is the treewidth
and $n$ is the input size of the instance. In other words, we obtain that the
problem PMC is fixed-parameter tractable when parameterized by treewidth.
Further, we take the exponential time hypothesis (ETH) into consideration and
establish lower bounds of bounded treewidth algorithms for PMC, yielding
asymptotically tight runtime bounds of our algorithm.
| [
{
"version": "v1",
"created": "Mon, 14 May 2018 21:02:28 GMT"
}
]
| 1,526,428,800,000 | [
[
"Fichte",
"Johannes K.",
""
],
[
"Morak",
"Michael",
""
],
[
"Hecher",
"Markus",
""
],
[
"Woltran",
"Stefan",
""
]
]
|
1805.05447 | Maartje Ter Hoeve | Maartje ter Hoeve, Anne Schuth, Daan Odijk, Maarten de Rijke | Faithfully Explaining Rankings in a News Recommender System | 9 pages, 3 tables, 3 figures, 4 algorithms | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is an increasing demand for algorithms to explain their outcomes. So
far, there is no method that explains the rankings produced by a ranking
algorithm. To address this gap we propose LISTEN, a LISTwise ExplaiNer, to
explain rankings produced by a ranking algorithm. To efficiently use LISTEN in
production, we train a neural network to learn the underlying explanation space
created by LISTEN; we call this model Q-LISTEN. We show that LISTEN produces
faithful explanations and that Q-LISTEN is able to learn these explanations.
Moreover, we show that LISTEN is safe to use in a real world environment: users
of a news recommendation system do not behave significantly differently when
they are exposed to explanations generated by LISTEN instead of manually
generated explanations.
| [
{
"version": "v1",
"created": "Mon, 14 May 2018 21:13:04 GMT"
}
]
| 1,526,428,800,000 | [
[
"ter Hoeve",
"Maartje",
""
],
[
"Schuth",
"Anne",
""
],
[
"Odijk",
"Daan",
""
],
[
"de Rijke",
"Maarten",
""
]
]
|
1805.05714 | Tom Hanika | Tom Hanika and Friedrich Martin Schneider and Gerd Stumme | Intrinsic dimension and its application to association rules | 4 pages, 1 figure | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The curse of dimensionality in the realm of association rules is twofold.
Firstly, we have the well known exponential increase in computational
complexity with increasing item set size. Secondly, there is a \emph{related
curse} concerned with the distribution of (sparse) data itself in high
dimension. The former problem is often coped with by projection, i.e., feature
selection, whereas the best known strategy for the latter is avoidance. This
work summarizes the first attempt to provide a computationally feasible method
for measuring the extent of dimension curse present in a data set with respect
to a particular class of machine learning procedures. This recent development
enables the application of various other methods from geometric analysis to be
investigated and applied in machine learning procedures in the presence of high
dimension.
| [
{
"version": "v1",
"created": "Tue, 15 May 2018 11:40:50 GMT"
}
]
| 1,526,428,800,000 | [
[
"Hanika",
"Tom",
""
],
[
"Schneider",
"Friedrich Martin",
""
],
[
"Stumme",
"Gerd",
""
]
]
|
1805.06020 | Tambet Matiisen | Tambet Matiisen, Aqeel Labash, Daniel Majoral, Jaan Aru, Raul Vicente | Do deep reinforcement learning agents model intentions? | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring other agents' mental states such as their knowledge, beliefs and
intentions is thought to be essential for effective interactions with other
agents. Recently, multiagent systems trained via deep reinforcement learning
have been shown to succeed in solving different tasks, but it remains unclear
how each agent models or represents other agents in its environment. In
this work we test whether deep reinforcement learning agents explicitly
represent other agents' intentions (their specific aims or goals) during a task
in which the agents had to coordinate the covering of different spots in a 2D
environment. In particular, we tracked over time the performance of a linear
decoder trained to predict the final goal of all agents from the hidden state
of each agent's neural network controller. We observed that the hidden layers
of agents represented explicit information about other agents' goals, i.e. the
target landmark they ended up covering. We also performed a series of
experiments, in which some agents were replaced by others with fixed goals, to
test the level of generalization of the trained agents. We noticed that during
the training phase the agents developed a differential preference for each
goal, which hindered generalization. To alleviate the above problem, we propose
simple changes to the MADDPG training algorithm which leads to better
generalization against unseen agents. We believe that training protocols
promoting more active intention reading mechanisms, e.g. by preventing simple
symmetry-breaking solutions, are a promising direction towards achieving more
robust generalization in different cooperative and competitive tasks.
| [
{
"version": "v1",
"created": "Tue, 15 May 2018 20:15:05 GMT"
},
{
"version": "v2",
"created": "Mon, 21 May 2018 14:42:57 GMT"
}
]
| 1,526,947,200,000 | [
[
"Matiisen",
"Tambet",
""
],
[
"Labash",
"Aqeel",
""
],
[
"Majoral",
"Daniel",
""
],
[
"Aru",
"Jaan",
""
],
[
"Vicente",
"Raul",
""
]
]
|
1805.06248 | Ryo Nakahashi | Ryo Nakahashi, Seiji Yamada | Modeling Human Inference of Others' Intentions in Complex Situations
with Plan Predictability Bias | Accepted at Cogsci 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent approach based on Bayesian inverse planning for the "theory of mind"
has shown good performance in modeling human cognition. However, perfect
inverse planning differs from human cognition during one kind of complex task
due to human bounded rationality. One example is an environment in which there
are many available plans for achieving a specific goal. We propose a "plan
predictability oriented model" as a model of inferring other people's goals in
complex environments. This model adds the bias that people prefer predictable
plans. This bias is calculated with simple plan prediction. We tested this
model with a behavioral experiment in which humans observed the partial path of
goal-directed actions. Our model had a higher correlation with human inference.
We also confirmed the robustness of our model with complex tasks and determined
that it can be improved by taking account of individual differences in "bounded
rationality".
| [
{
"version": "v1",
"created": "Wed, 16 May 2018 11:30:56 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Sep 2018 09:40:53 GMT"
},
{
"version": "v3",
"created": "Wed, 20 Nov 2019 11:22:47 GMT"
}
]
| 1,574,294,400,000 | [
[
"Nakahashi",
"Ryo",
""
],
[
"Yamada",
"Seiji",
""
]
]
|
1805.06368 | Victor G. Turrisi Costa | Victor Guilherme Turrisi da Costa, Andr\'e Carlos Ponce de Leon
Ferreira de Carvalho, Sylvio Barbon Junior | Strict Very Fast Decision Tree: a memory conservative algorithm for data
stream mining | 7 pages, 26 figures, Under R1 revision in Pattern Recognition Letters | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dealing with memory and time constraints is a current challenge when learning
from data streams with a massive amount of data. Many algorithms have been
proposed to handle these difficulties, among them, the Very Fast Decision Tree
(VFDT) algorithm. Although the VFDT has been widely used in data stream mining,
in the last years, several authors have suggested modifications to increase its
performance, putting aside memory concerns by proposing memory-costly
solutions. Besides, most data stream mining solutions have been centred around
ensembles, which combine the memory costs of their weak learners, usually
VFDTs. To reduce the memory cost, keeping the predictive performance, this
study proposes the Strict VFDT (SVFDT), a novel algorithm based on the VFDT.
The SVFDT algorithm minimises unnecessary tree growth, substantially reducing
memory usage and keeping competitive predictive performance. Moreover, since it
creates much shallower trees than the VFDT, the SVFDT can achieve a shorter
processing time. Experiments were carried out comparing the SVFDT with the VFDT
in 11 benchmark data stream datasets. This comparison assessed the trade-off
between accuracy, memory, and processing time. Statistical analysis showed that
the proposed algorithm obtained similar predictive performance and
significantly reduced processing time and memory use. Thus, SVFDT is a suitable
option for data stream mining with memory and time limitations, recommended as
a weak learner in ensemble-based solutions.
| [
{
"version": "v1",
"created": "Wed, 16 May 2018 15:28:39 GMT"
},
{
"version": "v2",
"created": "Thu, 17 May 2018 13:57:51 GMT"
}
]
| 1,526,601,600,000 | [
[
"da Costa",
"Victor Guilherme Turrisi",
""
],
[
"de Carvalho",
"André Carlos Ponce de Leon Ferreira",
""
],
[
"Junior",
"Sylvio Barbon",
""
]
]
|
1805.06610 | Wenyi Wang Mr. | Wenyi Wang | A Formulation of Recursive Self-Improvement and Its Possible Efficiency | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recursive self-improving (RSI) systems have been dreamed of since the early
days of computer science and artificial intelligence. However, many existing
studies on RSI systems remain philosophical, and lack clear formulations and
results. In this paper, we provide a formal definition for one class of RSI
systems, and then demonstrate the existence of computable and efficient RSI
systems on a restricted version. We use simulation to empirically show that we
achieve logarithmic runtime complexity with respect to the size of the search
space, and these results suggest it is possible to achieve an efficient
recursive self-improvement.
| [
{
"version": "v1",
"created": "Thu, 17 May 2018 06:08:37 GMT"
}
]
| 1,526,601,600,000 | [
[
"Wang",
"Wenyi",
""
]
]
|
1805.06664 | Sarmimala Saikia | S Saikia, R Verma, P Agarwal, G Shroff, L Vig and A Srinivasan | Evolutionary RL for Container Loading | 6 pages, 2 figures, accepted at ESANN 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Loading the containers on the ship from a yard is an important part of
port operations. Finding the optimal sequence for the loading of containers is
known to be computationally hard and is an example of combinatorial
optimization, which leads to the application of simple heuristics in practice.
In this paper, we propose an approach which uses a mix of Evolutionary
Strategies and Reinforcement Learning (RL) techniques to find an
approximation of the optimal solution. The RL based agent uses the Policy
Gradient method, an evolutionary reward strategy and a Pool of good
(not-optimal) solutions to find the approximation. We find that the RL agent
learns near-optimal solutions that outperform the heuristic solutions. We also
observe that the RL agent assisted with a pool generalizes better for unseen
problems than an RL agent without a pool. We present our results on synthetic
data as well as on subsets of real-world problems taken from a container
terminal. The results validate that our approach performs better than the
available heuristic solutions, and adapts better to unseen problems.
| [
{
"version": "v1",
"created": "Thu, 17 May 2018 09:19:35 GMT"
}
]
| 1,526,601,600,000 | [
[
"Saikia",
"S",
""
],
[
"Verma",
"R",
""
],
[
"Agarwal",
"P",
""
],
[
"Shroff",
"G",
""
],
[
"Vig",
"L",
""
],
[
"Srinivasan",
"A",
""
]
]
|
1805.06824 | Akshat Agarwal | Akshat Agarwal, Ryan Hope and Katia Sycara | Learning Time-Sensitive Strategies in Space Fortress | 10 pages, 3 figures. Withdrawn, superseded by arXiv:1809.02206 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although there has been remarkable progress and impressive performance on
reinforcement learning (RL) on Atari games, there are many problems with
challenging characteristics that have not yet been explored in Deep Learning
for RL. These include reward sparsity, abrupt context-dependent reversals of
strategy and time-sensitive game play. In this paper, we present Space
Fortress, a game that incorporates all these characteristics and experimentally
show that the presence of any of these renders state of the art Deep RL
algorithms incapable of learning. Then, we present our enhancements to an
existing algorithm and show, via an ablation study, large performance increases
from each enhancement. We discuss how each of these enhancements was able
to help and also argue that appropriate transfer learning boosts performance.
| [
{
"version": "v1",
"created": "Thu, 17 May 2018 15:36:42 GMT"
},
{
"version": "v2",
"created": "Wed, 30 May 2018 22:57:38 GMT"
},
{
"version": "v3",
"created": "Sun, 24 Jun 2018 18:12:34 GMT"
},
{
"version": "v4",
"created": "Thu, 13 Sep 2018 22:08:17 GMT"
}
]
| 1,537,142,400,000 | [
[
"Agarwal",
"Akshat",
""
],
[
"Hope",
"Ryan",
""
],
[
"Sycara",
"Katia",
""
]
]
|
1805.06861 | Carl Schultz | Carl Schultz, Mehul Bhatt, Jakob Suchan, Przemys{\l}aw Wa{\l}\k{e}ga | Answer Set Programming Modulo `Space-Time' | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present ASP Modulo `Space-Time', a declarative representational and
computational framework to perform commonsense reasoning about regions with
both spatial and temporal components. Supported are capabilities for mixed
qualitative-quantitative reasoning, consistency checking, and inferring
compositions of space-time relations; these capabilities combine and synergise
for applications in a range of AI application areas where the processing and
interpretation of spatio-temporal data is crucial. The framework and resulting
system is the only general KR-based method for declaratively reasoning about
the dynamics of `space-time' regions as first-class objects. We present an
empirical evaluation (with scalability and robustness results), and include
diverse application examples involving interpretation and control tasks.
| [
{
"version": "v1",
"created": "Thu, 17 May 2018 17:05:30 GMT"
}
]
| 1,526,601,600,000 | [
[
"Schultz",
"Carl",
""
],
[
"Bhatt",
"Mehul",
""
],
[
"Suchan",
"Jakob",
""
],
[
"Wałęga",
"Przemysław",
""
]
]
|
1805.06924 | Sergio Romano | Pablo Tano, Sergio Romano, Mariano Sigman, Alejo Salles and Santiago
Figueira | Towards a more flexible Language of Thought: Bayesian grammar updates
after each concept exposure | null | Phys. Rev. E 101, 042128 (2020) | 10.1103/PhysRevE.101.042128 | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent approaches to human concept learning have successfully combined the
power of symbolic, infinitely productive rule systems and statistical learning
to explain our ability to learn new concepts from just a few examples. The aim
of most of these studies is to reveal the underlying language structuring these
representations and providing a general substrate for thought. However,
describing a model of thought that is fixed once trained is against the
extensive literature that shows how experience shapes concept learning. Here,
we ask about the plasticity of these symbolic descriptive languages. We perform
a concept learning experiment demonstrating that humans can very rapidly
change the repertoire of symbols they use to identify concepts, by compiling
expressions which are frequently used into new symbols of the language. The
pattern of concept learning times is accurately described by a Bayesian agent
that rationally updates the probability of compiling a new expression according
to how useful it has been to compress concepts so far. By portraying the
Language of Thought as a flexible system of rules, we also highlight the
difficulty of pinning it down empirically.
| [
{
"version": "v1",
"created": "Thu, 17 May 2018 18:56:09 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Sep 2019 20:15:54 GMT"
}
]
| 1,588,118,400,000 | [
[
"Tano",
"Pablo",
""
],
[
"Romano",
"Sergio",
""
],
[
"Sigman",
"Mariano",
""
],
[
"Salles",
"Alejo",
""
],
[
"Figueira",
"Santiago",
""
]
]
|
1805.06992 | Jun Wang | Jun Wang, Sujoy Sikdar, Tyler Shepherd, Zhibing Zhao, Chunheng Jiang,
Lirong Xia | Practical Algorithms for STV and Ranked Pairs with Parallel Universes
Tiebreaking | 15 pages, 12 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | STV and ranked pairs (RP) are two well-studied voting rules for group
decision-making. They proceed in multiple rounds, and are affected by how ties
are broken in each round. However, the literature is surprisingly vague about
how ties should be broken. We propose the first algorithms for computing the
set of alternatives that are winners under some tiebreaking mechanism under STV
and RP, which is also known as parallel-universes tiebreaking (PUT).
Unfortunately, PUT-winners are NP-complete to compute under STV and RP, and
standard search algorithms from AI do not apply. We propose multiple DFS-based
algorithms along with pruning strategies and heuristics to prioritize search
direction to significantly improve the performance using machine learning. We
also propose novel ILP formulations for PUT-winners under STV and RP,
respectively. Experiments on synthetic and real-world data show that our
algorithms are overall significantly faster than ILP, while there are a few
cases where ILP is significantly faster for RP.
| [
{
"version": "v1",
"created": "Thu, 17 May 2018 23:20:57 GMT"
}
]
| 1,526,860,800,000 | [
[
"Wang",
"Jun",
""
],
[
"Sikdar",
"Sujoy",
""
],
[
"Shepherd",
"Tyler",
""
],
[
"Zhao",
"Zhibing",
""
],
[
"Jiang",
"Chunheng",
""
],
[
"Xia",
"Lirong",
""
]
]
|
1805.07008 | Marc Brittain | Marc Brittain and Peng Wei | Hierarchical Reinforcement Learning with Deep Nested Agents | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep hierarchical reinforcement learning has gained a lot of attention in
recent years due to its ability to produce state-of-the-art results in
challenging environments where non-hierarchical frameworks fail to learn useful
policies. However, as problem domains become more complex, deep hierarchical
reinforcement learning can become inefficient, leading to longer convergence
times and poor performance. We introduce the Deep Nested Agent framework, which
is a variant of deep hierarchical reinforcement learning where information from
the main agent is propagated to the low level $nested$ agent by incorporating
this information into the nested agent's state. We demonstrate the
effectiveness and performance of the Deep Nested Agent framework by applying it
to three scenarios in Minecraft with comparisons to a deep non-hierarchical
single agent framework, as well as a deep hierarchical framework.
| [
{
"version": "v1",
"created": "Fri, 18 May 2018 01:06:36 GMT"
}
]
| 1,526,860,800,000 | [
[
"Brittain",
"Marc",
""
],
[
"Wei",
"Peng",
""
]
]
|
1805.07069 | Mahdi Shaghaghi | Mahdi Shaghaghi, Raviraj S. Adve, Zhen Ding | Multifunction Cognitive Radar Task Scheduling Using Monte Carlo Tree
Search and Policy Networks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A modern radar may be designed to perform multiple functions, such as
surveillance, tracking, and fire control. Each function requires the radar to
execute a number of transmit-receive tasks. A radar resource management (RRM)
module makes decisions on parameter selection, prioritization, and scheduling
of such tasks. RRM becomes especially challenging in overload situations, where
some tasks may need to be delayed or even dropped. In general, task scheduling
is an NP-hard problem. In this work, we develop the branch-and-bound (B&B)
method which obtains the optimal solution but at exponential computational
complexity. On the other hand, heuristic methods have low complexity but
provide relatively poor performance. We resort to machine learning-based
techniques to address this issue; specifically we propose an approximate
algorithm based on the Monte Carlo tree search method. Along with using bound
and dominance rules to eliminate nodes from the search tree, we use a policy
network to help to reduce the width of the search. Such a network can be
trained using solutions obtained by running the B&B method offline on problems
with feasible complexity. We show that the proposed method provides
near-optimal performance, but with computational complexity orders of magnitude
smaller than the B&B algorithm.
| [
{
"version": "v1",
"created": "Fri, 18 May 2018 06:58:16 GMT"
}
]
| 1,526,860,800,000 | [
[
"Shaghaghi",
"Mahdi",
""
],
[
"Adve",
"Raviraj S.",
""
],
[
"Ding",
"Zhen",
""
]
]
|
1805.07180 | Yong Lai | Yong Lai | Approximate Model Counting by Partial Knowledge Compilation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model counting is the problem of computing the number of satisfying
assignments of a given propositional formula. Although exact model counters can
be naturally furnished by most of the knowledge compilation (KC) methods, in
practice, they fail to generate the compiled results for the exact counting of
models for certain formulas due to the explosion in sizes. Decision-DNNF is an
important KC language that captures most of the practical compilers. We propose
a generalized Decision-DNNF (referred to as partial Decision-DNNF) via
introducing a class of new leaf vertices (called unknown vertices), and then
propose an algorithm called PartialKC to generate random partial
Decision-DNNF formulas from the given formulas. An unbiased estimate of the
model count can be computed via a random partial Decision-DNNF formula. Each
call of PartialKC consists of multiple calls of MicroKC, each of which is an
importance sampling process equipped with KC
technologies. The experimental results show that PartialKC is more accurate
than both SampleSearch and SearchTreeSampler, PartialKC scales better than
SearchTreeSampler, and the KC technologies can obviously accelerate sampling.
| [
{
"version": "v1",
"created": "Fri, 18 May 2018 12:51:48 GMT"
}
]
| 1,526,860,800,000 | [
[
"Lai",
"Yong",
""
]
]
|
1805.07470 | Stephen McAleer | Stephen McAleer, Forest Agostinelli, Alexander Shmakov, Pierre Baldi | Solving the Rubik's Cube Without Human Knowledge | First three authors contributed equally. Submitted to NIPS 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A generally intelligent agent must be able to teach itself how to solve
problems in complex domains with minimal human supervision. Recently, deep
reinforcement learning algorithms combined with self-play have achieved
superhuman proficiency in Go, Chess, and Shogi without human data or domain
knowledge. In these environments, a reward is always received at the end of the
game, however, for many combinatorial optimization environments, rewards are
sparse and episodes are not guaranteed to terminate. We introduce Autodidactic
Iteration: a novel reinforcement learning algorithm that is able to teach
itself how to solve the Rubik's Cube with no human assistance. Our algorithm is
able to solve 100% of randomly scrambled cubes while achieving a median solve
length of 30 moves -- less than or equal to solvers that employ human domain
knowledge.
| [
{
"version": "v1",
"created": "Fri, 18 May 2018 23:07:31 GMT"
}
]
| 1,526,947,200,000 | [
[
"McAleer",
"Stephen",
""
],
[
"Agostinelli",
"Forest",
""
],
[
"Shmakov",
"Alexander",
""
],
[
"Baldi",
"Pierre",
""
]
]
|
1805.07547 | Emilio Cartoni | Emilio Cartoni, Gianluca Baldassarre | Autonomous discovery of the goal space to learn a parameterized skill | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A parameterized skill is a mapping from multiple goals/task parameters to the
policy parameters to accomplish them. Existing works in the literature show how
a parameterized skill can be learned given a task space that defines all the
possible achievable goals. In this work, we focus on tasks defined in terms of
final states (goals), and we address the challenge in which the agent aims to
autonomously acquire a parameterized skill to manipulate an initially unknown
environment. In this case, the task space is not known a priori and the agent
has to autonomously discover it. The agent may posit as a task space its whole
sensory space (i.e. the space of all possible sensor readings) as the
achievable goals will certainly be a subset of this space. However, the space
of achievable goals may be a very tiny subspace in relation to the whole
sensory space, thus directly using the sensor space as task space exposes the
agent to the curse of dimensionality and makes existing autonomous skill
acquisition algorithms inefficient. In this work we present an algorithm that
actively discovers the manifold of the achievable goals within the sensor
space. We validate the algorithm by employing it in multiple different
simulated scenarios where the agent actions achieve different types of goals:
moving a redundant arm, pushing an object, and changing the color of an object.
| [
{
"version": "v1",
"created": "Sat, 19 May 2018 08:18:39 GMT"
}
]
| 1,526,947,200,000 | [
[
"Cartoni",
"Emilio",
""
],
[
"Baldassarre",
"Gianluca",
""
]
]
|
1805.07715 | Andri Ashfahani | Andri Ashfahani, Mahardhika Pratama, Edwin Lughofer, Qing Cai, and
Huang Sheng | An Online RFID Localization in the Manufacturing Shopfloor | null | Predictive Maintenance in Dynamic Systems: Advanced Methods,
Decision Support Tools and Real-World Applications 2019 | 10.1007/978-3-030-05645-2_10 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Radio Frequency Identification technology has gained popularity for its cheap
and easy deployment. In the realm of the manufacturing shopfloor, it can be used to
track the location of manufacturing objects to achieve better efficiency. The
underlying challenge of localization lies in the non-stationary characteristics
of manufacturing shopfloor which calls for an adaptive life-long learning
strategy in order to arrive at accurate localization results. This paper
presents an evolving model based on a novel evolving intelligent system, namely
evolving Type-2 Quantum Fuzzy Neural Network (eT2QFNN), which features an
interval type-2 quantum fuzzy set with uncertain jump positions. The quantum
fuzzy set possesses a graded membership degree which enables better
identification of overlaps between classes. The eT2QFNN works fully in the
evolving mode where all parameters including the number of rules are
automatically adjusted and generated on the fly. The parameter adjustment
scenario relies on the decoupled extended Kalman filter method. Our numerical study
shows that eT2QFNN is able to deliver comparable accuracy compared to
state-of-the-art algorithms.
| [
{
"version": "v1",
"created": "Sun, 20 May 2018 06:27:53 GMT"
},
{
"version": "v2",
"created": "Wed, 29 May 2019 18:56:04 GMT"
}
]
| 1,559,260,800,000 | [
[
"Ashfahani",
"Andri",
""
],
[
"Pratama",
"Mahardhika",
""
],
[
"Lughofer",
"Edwin",
""
],
[
"Cai",
"Qing",
""
],
[
"Sheng",
"Huang",
""
]
]
|
1805.07797 | Naveen Sundar Govindarajulu | Naveen Sundar Govindarajulu and Selmer Bringsjord and Rikhiya Ghosh | One Formalization of Virtue Ethics via Learning | IACAP 2018
(http://www.iacap.org/wp-content/uploads/2017/10/IACAP-2018-short-program.pdf) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given that there exist many different formal and precise treatments of
deontological and consequentialist ethics, we turn to virtue ethics and
consider what could be a formalization of virtue ethics that makes it amenable
to automation. We present an embryonic formalization in a cognitive calculus
(which subsumes a quantified first-order logic) that has been previously used
to model robust ethical principles, in both the deontological and
consequentialist traditions.
| [
{
"version": "v1",
"created": "Sun, 20 May 2018 17:03:47 GMT"
}
]
| 1,526,947,200,000 | [
[
"Govindarajulu",
"Naveen Sundar",
""
],
[
"Bringsjord",
"Selmer",
""
],
[
"Ghosh",
"Rikhiya",
""
]
]
|
1805.07897 | Roope Tervo | Roope Tervo, Joonas Karjalainen, Alexander Jung | Predicting Electricity Outages Caused by Convective Storms | IEEE DSW 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of predicting power outages in an electrical power
grid due to hazards produced by convective storms. These storms produce extreme
weather phenomena such as intense wind, tornadoes and lightning over a small
area. In this paper, we discuss the application of state-of-the-art machine
learning techniques, such as random forest classifiers and deep neural
networks, to predict the amount of damage caused by storms. We cast this
application as a classification problem where the goal is to classify storm
cells into a finite number of classes, each corresponding to a certain amount
of expected damage. The classification method uses as input features estimates
of storm cell location and movement, which have to be extracted from the raw
data.
A main challenge of this application is that the training data is heavily
imbalanced as the occurrence of extreme weather events is rare. In order to
address this issue, we applied the SMOTE technique.
| [
{
"version": "v1",
"created": "Mon, 21 May 2018 05:28:22 GMT"
}
]
| 1,526,947,200,000 | [
[
"Tervo",
"Roope",
""
],
[
"Karjalainen",
"Joonas",
""
],
[
"Jung",
"Alexander",
""
]
]
|
1805.08256 | Solimul Chowdhury | Md Solimul Chowdhury and Victor Silva | Evolving Real-Time Heuristics Search Algorithms with Building Blocks | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The research area of real-time heuristics search has produced many
algorithms. In the landscape of real-time heuristics search research, it is not
rare to find that an algorithm X that appears to perform better than algorithm
Y on one group of problems performs worse than Y on another group of problems.
If these published algorithms are combined to generate a more powerful space of
algorithms, then that novel space of algorithms may solve a distribution of
problems more efficiently. Based on this intuition, a recent work (Bulitko 2016)
has defined the task of finding a combination of heuristic search algorithms
as a survival task. In this evolutionary approach, a space of algorithms is
defined over a set of building blocks (published algorithms), and a simulated
evolution is used to recombine these building blocks to find the best
algorithm from that space of algorithms.
In this paper, we extend the set of building blocks by adding one published
algorithm, namely the lookahead-based A-star-shaped local search space
generation method from LSSLRTA-star, plus an unpublished novel strategy to generate local
search space with Greedy Best First Search. Then we perform experiments in the
new space of algorithms, which show that the best algorithms selected by the
evolutionary process have the following property: the deeper the lookahead
depth of an algorithm, the lower its suboptimality and scrubbing complexity.
| [
{
"version": "v1",
"created": "Mon, 21 May 2018 18:52:00 GMT"
}
]
| 1,527,033,600,000 | [
[
"Chowdhury",
"Md Solimul",
""
],
[
"Silva",
"Victor",
""
]
]
|
1805.08427 | Long Ouyang | Long Ouyang | Bayesian Inference of Regular Expressions from Human-Generated Example
Strings | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In programming by example, users "write" programs by generating a small
number of input-output examples and asking the computer to synthesize
consistent programs. We consider a challenging problem in this domain: learning
regular expressions (regexes) from positive and negative example strings. This
problem is challenging, as (1) user-generated examples may not be informative
enough to sufficiently constrain the hypothesis space, and (2) even if
user-generated examples are in principle informative, there is still a massive
search space to examine. We frame regex induction as the problem of inferring a
probabilistic regular grammar and propose an efficient inference approach that
uses a novel stochastic process recognition model. This model incrementally
"grows" a grammar using positive examples as a scaffold. We show that this
approach is competitive with human ability to learn regexes from examples.
| [
{
"version": "v1",
"created": "Tue, 22 May 2018 07:28:21 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Sep 2018 21:45:53 GMT"
}
]
| 1,538,092,800,000 | [
[
"Ouyang",
"Long",
""
]
]
|
1805.08592 | Susumu Katayama | Susumu Katayama | Computable Variants of AIXI which are More Powerful than AIXItl | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents Unlimited Computable AI, or UCAI, a family of
computable variants of AIXI. UCAI is more powerful than AIXItl, a
conventional family of computable variants of AIXI, in the following ways: 1)
UCAI supports models of terminating computation, including typed lambda
calculus, while AIXItl supports only Turing machines with timeout t, which can
be simulated by typed lambda calculus for any t; 2) unlike UCAI, AIXItl limits
the program length to l.
| [
{
"version": "v1",
"created": "Tue, 22 May 2018 14:09:49 GMT"
},
{
"version": "v2",
"created": "Thu, 9 Aug 2018 18:11:01 GMT"
},
{
"version": "v3",
"created": "Fri, 25 Jan 2019 10:43:23 GMT"
}
]
| 1,548,633,600,000 | [
[
"Katayama",
"Susumu",
""
]
]
|
1805.08915 | Vahid Behzadan | Vahid Behzadan, Arslan Munir and Roman V. Yampolskiy | A Psychopathological Approach to Safety Engineering in AI and AGI | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The complexity of dynamics in AI techniques is already approaching that of
complex adaptive systems, thus curtailing the feasibility of formal
controllability and reachability analysis in the context of AI safety. It
follows that the envisioned instances of Artificial General Intelligence (AGI)
will also suffer from challenges of complexity. To tackle such issues, we
propose the modeling of deleterious behaviors in AI and AGI as psychological
disorders, thereby enabling the employment of psychopathological approaches to
analysis and control of misbehaviors. Accordingly, we present a discussion on
the feasibility of the psychopathological approaches to AI safety, and propose
general directions for research on modeling, diagnosis, and treatment of
psychological disorders in AGI.
| [
{
"version": "v1",
"created": "Wed, 23 May 2018 00:19:07 GMT"
}
]
| 1,527,120,000,000 | [
[
"Behzadan",
"Vahid",
""
],
[
"Munir",
"Arslan",
""
],
[
"Yampolskiy",
"Roman V.",
""
]
]
|
1805.09169 | Maaz Amjad | Maaz Amjad, fariha Bukhari, Iqra Ameer, Alexander Gelbukh | A distinct approach to diagnose Dengue Fever with the help of Soft Set
Theory | 10 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematics has played a substantial role in revolutionizing medical
science. Intelligent systems based on mathematical theories have proved to be
efficient in diagnosing various diseases. In this paper, we used an expert
system based on soft set theory and fuzzy set theory named as a soft expert
system to diagnose the tropical disease dengue. The objective of using the soft
expert system is to predict the risk level of a patient having dengue fever by using
input variables like age, TLC, SGOT, platelets count and blood pressure. The
proposed method explicitly demonstrates the exact percentage of the risk level
of dengue fever automatically, circumventing all possible (medical)
imprecisions.
| [
{
"version": "v1",
"created": "Tue, 22 May 2018 16:30:10 GMT"
},
{
"version": "v2",
"created": "Thu, 24 May 2018 06:37:38 GMT"
},
{
"version": "v3",
"created": "Wed, 21 Nov 2018 17:21:26 GMT"
}
]
| 1,542,844,800,000 | [
[
"Amjad",
"Maaz",
""
],
[
"Bukhari",
"fariha",
""
],
[
"Ameer",
"Iqra",
""
],
[
"Gelbukh",
"Alexander",
""
]
]
|
1805.09901 | Oktay Gunluk | Sanjeeb Dash, Oktay G\"unl\"uk and Dennis Wei | Boolean Decision Rules via Column Generation | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper considers the learning of Boolean rules in either disjunctive
normal form (DNF, OR-of-ANDs, equivalent to decision rule sets) or conjunctive
normal form (CNF, AND-of-ORs) as an interpretable model for classification. An
integer program is formulated to optimally trade classification accuracy for
rule simplicity. Column generation (CG) is used to efficiently search over an
exponential number of candidate clauses (conjunctions or disjunctions) without
the need for heuristic rule mining. This approach also bounds the gap between
the selected rule set and the best possible rule set on the training data. To
handle large datasets, we propose an approximate CG algorithm using
randomization. Compared to three recently proposed alternatives, the CG
algorithm dominates the accuracy-simplicity trade-off in 7 out of 15 datasets.
When maximized for accuracy, CG is competitive with rule learners designed for
this purpose, sometimes finding significantly simpler solutions that are no
less accurate.
| [
{
"version": "v1",
"created": "Thu, 24 May 2018 21:12:26 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Aug 2020 23:11:57 GMT"
}
]
| 1,596,758,400,000 | [
[
"Dash",
"Sanjeeb",
""
],
[
"Günlük",
"Oktay",
""
],
[
"Wei",
"Dennis",
""
]
]
|
1805.09975 | Daniel McDuff | Daniel McDuff and Ashish Kapoor | Visceral Machines: Risk-Aversion in Reinforcement Learning with
Intrinsic Physiological Rewards | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As people learn to navigate the world, autonomic nervous system (e.g., "fight
or flight") responses provide intrinsic feedback about the potential
consequence of action choices (e.g., becoming nervous when close to a cliff
edge or driving fast around a bend). Physiological changes are correlated with
these biological preparations to protect oneself from danger. We present a
novel approach to reinforcement learning that leverages a task-independent
intrinsic reward function trained on peripheral pulse measurements that are
correlated with human autonomic nervous system responses. Our hypothesis is
that such reward functions can circumvent the challenges associated with sparse
and skewed rewards in reinforcement learning settings and can help improve
sample efficiency. We test this in a simulated driving environment and show
that it can increase the speed of learning and reduce the number of collisions
during the learning stage.
| [
{
"version": "v1",
"created": "Fri, 25 May 2018 04:22:31 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Mar 2019 03:30:42 GMT"
}
]
| 1,553,472,000,000 | [
[
"McDuff",
"Daniel",
""
],
[
"Kapoor",
"Ashish",
""
]
]
|
1805.09979 | Armin Haller | Krzysztof Janowicz, Armin Haller, Simon J D Cox, Danh Le Phuoc, Maxime
Lefrancois | SOSA: A Lightweight Ontology for Sensors, Observations, Samples, and
Actuators | null | Journal of Web Semantics, 2018 | 10.1016/j.websem.2018.06.003 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Sensor, Observation, Sample, and Actuator (SOSA) ontology provides a
formal but lightweight general-purpose specification for modeling the
interaction between the entities involved in the acts of observation,
actuation, and sampling. SOSA is the result of rethinking the W3C-XG Semantic
Sensor Network (SSN) ontology based on changes in scope and target audience,
technical developments, and lessons learned over the past years. SOSA also acts
as a replacement of SSN's Stimulus Sensor Observation (SSO) core. It has been
developed by the first joint working group of the Open Geospatial Consortium
(OGC) and the World Wide Web Consortium (W3C) on \emph{Spatial Data on the
Web}. In this work, we motivate the need for SOSA, provide an overview of the
main classes and properties, and briefly discuss its integration with the new
release of the SSN ontology as well as various other alignments to
specifications such as OGC's Observations and Measurements (O\&M),
Dolce-Ultralite (DUL), and other prominent ontologies. We will also touch upon
common modeling problems and application areas related to publishing and
searching observation, sampling, and actuation data on the Web. The SOSA
ontology and standard can be accessed at
\url{https://www.w3.org/TR/vocab-ssn/}.
| [
{
"version": "v1",
"created": "Fri, 25 May 2018 04:41:54 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Dec 2018 10:13:53 GMT"
}
]
| 1,545,868,800,000 | [
[
"Janowicz",
"Krzysztof",
""
],
[
"Haller",
"Armin",
""
],
[
"Cox",
"Simon J D",
""
],
[
"Phuoc",
"Danh Le",
""
],
[
"Lefrancois",
"Maxime",
""
]
]
|
1805.10000 | Yang Yu | Jing-Cheng Shi, Yang Yu, Qing Da, Shi-Yong Chen, An-Xiang Zeng | Virtual-Taobao: Virtualizing Real-world Online Retail Environment for
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applying reinforcement learning in physical-world tasks is extremely
challenging. It is commonly infeasible to sample a large number of trials, as
required by current reinforcement learning methods, in a physical environment.
This paper reports our project on using reinforcement learning for better
commodity search in Taobao, one of the largest online retail platforms and
meanwhile a physical environment with a high sampling cost. Instead of training
reinforcement learning in Taobao directly, we present our approach: first we
build Virtual Taobao, a simulator learned from historical customer behavior
data through the proposed GAN-SD (GAN for Simulating Distributions) and MAIL
(multi-agent adversarial imitation learning), and then we train policies in
Virtual Taobao with no physical costs, in which the ANC (Action Norm Constraint)
strategy is proposed to reduce over-fitting. In experiments, Virtual Taobao is
trained from hundreds of millions of customers' records, and its properties are
compared with the real environment. The results disclose that Virtual Taobao
faithfully recovers important properties of the real environment. We also show
that the policies trained in Virtual Taobao can have significantly superior
online performance to the traditional supervised approaches. We hope our work
could shed some light on reinforcement learning applications in complex
physical environments.
| [
{
"version": "v1",
"created": "Fri, 25 May 2018 06:39:31 GMT"
}
]
| 1,527,465,600,000 | [
[
"Shi",
"Jing-Cheng",
""
],
[
"Yu",
"Yang",
""
],
[
"Da",
"Qing",
""
],
[
"Chen",
"Shi-Yong",
""
],
[
"Zeng",
"An-Xiang",
""
]
]
|
1805.10461 | V\'ictor Guti\'errez-Basulto | V\'ictor Guti\'errez-Basulto and Steven Schockaert | From Knowledge Graph Embedding to Ontology Embedding? An Analysis of the
Compatibility between Vector Space Representations and Rules | Full version of a paper accepted at KR-2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent years have witnessed the successful application of low-dimensional
vector space representations of knowledge graphs to predict missing facts or
find erroneous ones. However, it is not yet well-understood to what extent
ontological knowledge, e.g. given as a set of (existential) rules, can be
embedded in a principled way. To address this shortcoming, in this paper we
introduce a general framework based on a view of relations as regions, which
allows us to study the compatibility between ontological knowledge and
different types of vector space embeddings. Our technical contribution is
two-fold. First, we show that some of the most popular existing embedding
methods are not capable of modelling even very simple types of rules, which in
particular also means that they are not able to learn the type of dependencies
captured by such rules. Second, we study a model in which relations are
modelled as convex regions. We show in particular that ontologies which are
expressed using so-called quasi-chained existential rules can be exactly
represented using convex regions, such that any set of facts which is induced
using that vector space embedding is logically consistent and deductively
closed with respect to the input ontology.
| [
{
"version": "v1",
"created": "Sat, 26 May 2018 10:56:47 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Aug 2018 05:35:21 GMT"
},
{
"version": "v3",
"created": "Tue, 21 Aug 2018 14:21:04 GMT"
}
]
| 1,534,896,000,000 | [
[
"Gutiérrez-Basulto",
"Víctor",
""
],
[
"Schockaert",
"Steven",
""
]
]
|
1805.10587 | Jiewen Wu | Freddy Lecue and Jiewen Wu | Semantic Explanations of Predictions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The main objective of explanations is to transmit knowledge to humans. This
work proposes to construct informative explanations for predictions made from
machine learning models. Motivated by the observations from social sciences,
our approach selects data points from the training sample that exhibit special
characteristics crucial for explanation, for instance, ones contrastive to the
classification prediction and ones representative of the models. Subsequently,
semantic concepts are derived from the selected data points through the use of
domain ontologies. These concepts are filtered and ranked to produce
informative explanations that improve human understanding. The main features
of our approach are that (1) knowledge about explanations is captured in the
form of ontological concepts, (2) explanations include contrastive evidence in
addition to normal evidence, and (3) explanations are user-relevant.
| [
{
"version": "v1",
"created": "Sun, 27 May 2018 06:55:15 GMT"
}
]
| 1,527,552,000,000 | [
[
"Lecue",
"Freddy",
""
],
[
"Wu",
"Jiewen",
""
]
]
|
1805.10768 | Heonseok Ha | Heonseok Ha, Uiwon Hwang, Yongjun Hong, Jahee Jang, Sungroh Yoon | Deep Trustworthy Knowledge Tracing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge tracing (KT), a key component of an intelligent tutoring system, is
a machine learning technique that estimates the mastery level of a student
based on his/her past performance. The objective of KT is to predict a
student's response to the next question. Compared with traditional KT models,
deep learning-based KT (DLKT) models show better predictive performance because
of the representation power of deep neural networks. Various methods have been
proposed to improve the performance of DLKT, but few studies have been
conducted on the reliability of DLKT. In this work, we claim that the existing
DLKTs are not reliable in real education environments. To substantiate the
claim, we show limitations of DLKT from various perspectives such as knowledge
state update failure, catastrophic forgetting, and non-interpretability. We
then propose a novel regularization to address these problems. The proposed
method allows us to achieve trustworthy DLKT. In addition, the proposed model
which is trained on scenarios with forgetting can also be easily extended to
scenarios without forgetting.
| [
{
"version": "v1",
"created": "Mon, 28 May 2018 04:50:24 GMT"
},
{
"version": "v2",
"created": "Mon, 1 Oct 2018 02:56:16 GMT"
},
{
"version": "v3",
"created": "Wed, 18 Sep 2019 04:49:53 GMT"
}
]
| 1,568,851,200,000 | [
[
"Ha",
"Heonseok",
""
],
[
"Hwang",
"Uiwon",
""
],
[
"Hong",
"Yongjun",
""
],
[
"Jang",
"Jahee",
""
],
[
"Yoon",
"Sungroh",
""
]
]
|
1805.10820 | Riccardo Guidotti | Riccardo Guidotti, Anna Monreale, Salvatore Ruggieri, Dino Pedreschi,
Franco Turini, Fosca Giannotti | Local Rule-Based Explanations of Black Box Decision Systems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent years have witnessed the rise of accurate but obscure decision
systems which hide the logic of their internal decision processes from the users.
The lack of explanations for the decisions of black box systems is a key
ethical issue, and a limitation to the adoption of machine learning components
in socially sensitive and safety-critical contexts.
In this paper we focus on the problem of black box outcome explanation, i.e.,
explaining the reasons of the decision taken on a specific instance. We propose
LORE, an agnostic method able to provide interpretable and faithful
explanations. LORE first learns a local interpretable predictor on a synthetic
neighborhood generated by a genetic algorithm. Then it derives from the logic
of the local interpretable predictor a meaningful explanation consisting of: a
decision rule, which explains the reasons of the decision; and a set of
counterfactual rules, suggesting the changes in the instance's features that
lead to a different outcome. Extensive experiments show that LORE outperforms
existing methods and baselines both in the quality of explanations and in the
accuracy in mimicking the black box.
| [
{
"version": "v1",
"created": "Mon, 28 May 2018 08:56:40 GMT"
}
]
| 1,527,552,000,000 | [
[
"Guidotti",
"Riccardo",
""
],
[
"Monreale",
"Anna",
""
],
[
"Ruggieri",
"Salvatore",
""
],
[
"Pedreschi",
"Dino",
""
],
[
"Turini",
"Franco",
""
],
[
"Giannotti",
"Fosca",
""
]
]
|
1805.10872 | Robin Manhaeve | Robin Manhaeve, Sebastijan Duman\v{c}i\'c, Angelika Kimmig, Thomas
Demeester, Luc De Raedt | DeepProbLog: Neural Probabilistic Logic Programming | Accepted for spotlight at NeurIPS 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce DeepProbLog, a probabilistic logic programming language that
incorporates deep learning by means of neural predicates. We show how existing
inference and learning techniques can be adapted for the new language. Our
experiments demonstrate that DeepProbLog supports 1) both symbolic and
subsymbolic representations and inference, 2) program induction, 3)
probabilistic (logic) programming, and 4) (deep) learning from examples. To the best of our
knowledge, this work is the first to propose a framework where general-purpose
neural networks and expressive probabilistic-logical modeling and reasoning are
integrated in a way that exploits the full expressiveness and strengths of both
worlds and can be trained end-to-end based on examples.
| [
{
"version": "v1",
"created": "Mon, 28 May 2018 11:33:00 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Dec 2018 09:56:40 GMT"
}
]
| 1,544,659,200,000 | [
[
"Manhaeve",
"Robin",
""
],
[
"Dumančić",
"Sebastijan",
""
],
[
"Kimmig",
"Angelika",
""
],
[
"Demeester",
"Thomas",
""
],
[
"De Raedt",
"Luc",
""
]
]
|
1805.10900 | Richard Forster | Richard Forster, Agnes Fulop | Hierarchical clustering with deep Q-learning | Submitted for review | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reconstruction and analysis of high energy particle physics data is
just as important as the analysis of the structure in real-world networks.
In a previous study it was explored how hierarchical clustering algorithms can
be combined with kt cluster algorithms to provide a more generic clusterization
method. Building on that, this paper explores the possibilities to involve deep
learning in the process of cluster computation, by applying reinforcement
learning techniques. The result is a model that, by learning on a modest
dataset of 10,000 nodes during 70 epochs, can reach 83.77% precision in
predicting the appropriate clusters.
| [
{
"version": "v1",
"created": "Mon, 28 May 2018 12:59:51 GMT"
}
]
| 1,527,552,000,000 | [
[
"Forster",
"Richard",
""
],
[
"Fulop",
"Agnes",
""
]
]
|
1805.10966 | German I. Parisi | German I. Parisi, Jun Tani, Cornelius Weber, Stefan Wermter | Lifelong Learning of Spatiotemporal Representations with Dual-Memory
Recurrent Self-Organization | null | Parisi GI, Tani J, Weber C and Wermter S (2018) Lifelong Learning
of Spatiotemporal Representations With Dual-Memory Recurrent
Self-Organization. Front. Neurorobot. 12:78 | 10.3389/fnbot.2018.00078 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial autonomous agents and robots interacting in complex environments
are required to continually acquire and fine-tune knowledge over sustained
periods of time. The ability to learn from continuous streams of information is
referred to as lifelong learning and represents a long-standing challenge for
neural network models due to catastrophic forgetting. Computational models of
lifelong learning typically alleviate catastrophic forgetting in experimental
scenarios with given datasets of static images and limited complexity, thereby
differing significantly from the conditions artificial agents are exposed to.
In more natural settings, sequential information may become progressively
available over time and access to previous experience may be restricted. In
this paper, we propose a dual-memory self-organizing architecture for lifelong
learning scenarios. The architecture comprises two growing recurrent networks
with the complementary tasks of learning object instances (episodic memory) and
categories (semantic memory). Both growing networks can expand in response to
novel sensory experience: the episodic memory learns fine-grained
spatiotemporal representations of object instances in an unsupervised fashion
while the semantic memory uses task-relevant signals to regulate structural
plasticity levels and develop more compact representations from episodic
experience. For the consolidation of knowledge in the absence of external
sensory input, the episodic memory periodically replays trajectories of neural
reactivations. We evaluate the proposed model on the CORe50 benchmark dataset
for continuous object recognition, showing that we significantly outperform
current methods of lifelong learning in three different incremental learning
scenarios.
| [
{
"version": "v1",
"created": "Mon, 28 May 2018 15:08:19 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Sep 2018 20:17:54 GMT"
},
{
"version": "v3",
"created": "Tue, 30 Oct 2018 18:58:01 GMT"
},
{
"version": "v4",
"created": "Wed, 19 Dec 2018 00:41:53 GMT"
}
]
| 1,545,264,000,000 | [
[
"Parisi",
"German I.",
""
],
[
"Tani",
"Jun",
""
],
[
"Weber",
"Cornelius",
""
],
[
"Wermter",
"Stefan",
""
]
]
|
1805.11375 | Mohit Kumar | Mohit Kumar, Stefano Teso, Luc De Raedt | Automating Personnel Rostering by Learning Constraints Using Tensors | 4 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many problems in operations research require that constraints be specified in
the model. Determining the right constraints is a hard and laborsome task. We
propose an approach to automate this process using artificial intelligence and
machine learning principles. So far there has been only little work on learning
constraints within the operations research community. We focus on personnel
rostering and scheduling problems in which there are often past schedules
available and show that it is possible to automatically learn constraints from
such examples. To realize this, we adapted some techniques from the constraint
programming community and we have extended them in order to cope with
multidimensional examples. The method uses a tensor representation of the
example, which helps in capturing the dimensionality as well as the structure
of the example, and applies tensor operations to find the constraints that are
satisfied by the example. To evaluate the proposed algorithm, we used
constraints from the Nurse Rostering Competition and generated solutions that
satisfy these constraints; these solutions were then used as examples to learn
constraints. Experiments demonstrate that the proposed algorithm is capable of
producing human readable constraints that capture the underlying
characteristics of the examples.
| [
{
"version": "v1",
"created": "Tue, 29 May 2018 12:04:13 GMT"
}
]
| 1,527,638,400,000 | [
[
"Kumar",
"Mohit",
""
],
[
"Teso",
"Stefano",
""
],
[
"De Raedt",
"Luc",
""
]
]
|
1805.11548 | Luchen Li | Luchen Li and Matthieu Komorowski and Aldo A. Faisal | The Actor Search Tree Critic (ASTC) for Off-Policy POMDP Learning in
Medical Decision Making | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Off-policy reinforcement learning enables near-optimal policy from suboptimal
experience, thereby providing opportunities for artificial intelligence
applications in healthcare. Previous works have mainly framed patient-clinician
interactions as Markov decision processes, while true physiological states are
not necessarily fully observable from clinical data. We capture this situation
with partially observable Markov decision process, in which an agent optimises
its actions in a belief represented as a distribution of patient states
inferred from individual history trajectories. A Gaussian mixture model is
fitted for the observed data. Moreover, we take into account the fact that
nuance in pharmaceutical dosage could presumably result in significantly
different effect by modelling a continuous policy through a Gaussian
approximator directly in the policy space, i.e. the actor. To address the
challenge of infinite number of possible belief states which renders exact
value iteration intractable, we evaluate and plan for only every encountered
belief, through heuristic search tree by tightly maintaining lower and upper
bounds of the true value of belief. We further resort to function
approximations to update value bounds estimation, i.e. the critic, so that the
tree search can be improved through more compact bounds at the fringe nodes
that will be back-propagated to the root. Both actor and critic parameters are
learned via gradient-based approaches. Our proposed policy trained from real
intensive care unit data is capable of dictating dosing on vasopressors and
intravenous fluids for sepsis patients that lead to the best patient outcomes.
| [
{
"version": "v1",
"created": "Tue, 29 May 2018 15:55:33 GMT"
},
{
"version": "v2",
"created": "Thu, 31 May 2018 09:06:43 GMT"
},
{
"version": "v3",
"created": "Sun, 3 Jun 2018 16:37:31 GMT"
}
]
| 1,528,156,800,000 | [
[
"Li",
"Luchen",
""
],
[
"Komorowski",
"Matthieu",
""
],
[
"Faisal",
"Aldo A.",
""
]
]
|
1805.11555 | Neil Urquhart | Neil Urquhart, Emma Hart | Optimisation and Illumination of a Real-world Workforce Scheduling and
Routing Application via Map-Elites | This is pre-print, a link to the published version will be added when
it is published | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Workforce Scheduling and Routing Problems (WSRP) are very common in many
practical domains, and usually, have a number of objectives. Illumination
algorithms such as Map-Elites (ME) have recently gained traction in application
to {\em design} problems, in providing multiple diverse solutions as well as
illuminating the solution space in terms of user-defined characteristics, but
typically require significant computational effort to produce the solution
archive. We investigate whether ME can provide an effective approach to solving
WSRP, a {\em repetitive} problem in which solutions have to be produced quickly
and often. The goals of the paper are two-fold. The first is to evaluate
whether ME can provide solutions of competitive quality to an Evolutionary
Algorithm (EA) in terms of a single objective function, and the second to
examine its ability to provide a repertoire of solutions that maximise user
choice. We find that very small computational budgets favour the EA in terms of
quality, but ME outperforms the EA at larger budgets, provides a more diverse
array of solutions, and lends insight to the end-user.
| [
{
"version": "v1",
"created": "Tue, 29 May 2018 16:03:06 GMT"
}
]
| 1,527,638,400,000 | [
[
"Urquhart",
"Neil",
""
],
[
"Hart",
"Emma",
""
]
]
|
1805.11648 | Michael Hind | Noel C. F. Codella, Michael Hind, Karthikeyan Natesan Ramamurthy,
Murray Campbell, Amit Dhurandhar, Kush R. Varshney, Dennis Wei, Aleksandra
Mojsilovic | Teaching Meaningful Explanations | 9 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The adoption of machine learning in high-stakes applications such as
healthcare and law has lagged in part because predictions are not accompanied
by explanations comprehensible to the domain user, who often holds the ultimate
responsibility for decisions and outcomes. In this paper, we propose an
approach to generate such explanations in which training data is augmented to
include, in addition to features and labels, explanations elicited from domain
users. A joint model is then learned to produce both labels and explanations
from the input features. This simple idea ensures that explanations are
tailored to the complexity expectations and domain knowledge of the consumer.
Evaluation spans multiple modeling techniques on a game dataset, a (visual)
aesthetics dataset, a chemical odor dataset and a Melanoma dataset showing that
our approach is generalizable across domains and algorithms. Results
demonstrate that meaningful explanations can be reliably taught to machine
learning algorithms, and in some cases, also improve modeling accuracy.
| [
{
"version": "v1",
"created": "Tue, 29 May 2018 18:35:44 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Sep 2018 01:44:51 GMT"
}
]
| 1,536,710,400,000 | [
[
"Codella",
"Noel C. F.",
""
],
[
"Hind",
"Michael",
""
],
[
"Ramamurthy",
"Karthikeyan Natesan",
""
],
[
"Campbell",
"Murray",
""
],
[
"Dhurandhar",
"Amit",
""
],
[
"Varshney",
"Kush R.",
""
],
[
"Wei",
"Dennis",
""
],
[
"Mojsilovic",
"Aleksandra",
""
]
]
|
1805.11768 | Michael Green | Michael Cerny Green, Ahmed Khalifa, Gabriella A. B. Barros, and Julian
Togelius | "Press Space to Fire": Automatic Video Game Tutorial Generation | 6 pages, 4 figures, 1 table, Published at the EXAG workshop as a part
of AIIDE 2017 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose the problem of tutorial generation for games, i.e. to generate
tutorials which can teach players to play games, as an AI problem. This problem
can be approached in several ways, including generating natural language
descriptions of game rules, generating instructive game levels, and generating
demonstrations of how to play a game using agents that play in a human-like
manner. We further argue that the General Video Game AI framework provides a
useful testbed for addressing this problem.
| [
{
"version": "v1",
"created": "Wed, 30 May 2018 01:21:33 GMT"
}
]
| 1,527,724,800,000 | [
[
"Green",
"Michael Cerny",
""
],
[
"Khalifa",
"Ahmed",
""
],
[
"Barros",
"Gabriella A. B.",
""
],
[
"Togelius",
"Julian",
""
]
]
|
1805.11820 | Christian Blum | Christian Blum and Haroldo Gambini Santos | Generic CP-Supported CMSA for Binary Integer Linear Programs | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Construct, Merge, Solve and Adapt (CMSA) is a general hybrid metaheuristic
for solving combinatorial optimization problems. At each iteration, CMSA (1)
constructs feasible solutions to the tackled problem instance in a
probabilistic way and (2) solves a reduced problem instance (if possible) to
optimality. The construction of feasible solutions is hereby problem-specific,
usually involving a fast greedy heuristic. The goal of this paper is to design
a problem-agnostic CMSA variant whose exclusive input is an integer linear
program (ILP). In order to reduce the complexity of this task, the current
study is restricted to binary ILPs. In addition to a basic problem-agnostic
CMSA variant, we also present an extended version that makes use of a
constraint propagation engine for constructing solutions. The results show that
our technique is able to match the upper bounds of the standalone application
of CPLEX in the context of rather easy-to-solve instances, while it generally
outperforms the standalone application of CPLEX in the context of hard
instances. Moreover, the results indicate that the support of the constraint
propagation engine is useful in the context of problems for which finding
feasible solutions is rather difficult.
| [
{
"version": "v1",
"created": "Wed, 30 May 2018 06:22:34 GMT"
}
]
| 1,527,724,800,000 | [
[
"Blum",
"Christian",
""
],
[
"Santos",
"Haroldo Gambini",
""
]
]
|
1805.12069 | Eray Ozkural | Eray \"Ozkural | Omega: An Architecture for AI Unification | This is a high-level overview of the Omega AGI architecture which is
the basis of a data science automation system. Submitted to a workshop | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the open-ended, modular, self-improving Omega AI unification
architecture which is a refinement of Solomonoff's Alpha architecture, as
considered from first principles. The architecture embodies several crucial
principles of general intelligence including diversity of representations,
diversity of data types, integrated memory, modularity, and higher-order
cognition. We retain the basic design of a fundamental algorithmic substrate
called an "AI kernel" for problem solving and basic cognitive functions like
memory, and a larger, modular architecture that re-uses the kernel in many
ways. Omega includes eight representation languages and six classes of neural
networks, which are briefly introduced. The architecture is intended to
initially address data science automation, hence it includes many problem
solving methods for statistical tasks. We review the broad software
architecture, higher-order cognition, self-improvement, modular neural
architectures, intelligent agents, the process and memory hierarchy, hardware
abstraction, peer-to-peer computing, and data abstraction facility.
| [
{
"version": "v1",
"created": "Wed, 16 May 2018 22:08:28 GMT"
}
]
| 1,527,724,800,000 | [
[
"Özkural",
"Eray",
""
]
]
|
1805.12402 | Ernesto Jimenez-Ruiz | Ernesto Jimenez-Ruiz and Asan Agibetov and Matthias Samwald and
Valerie Cross | Breaking-down the Ontology Alignment Task with a Lexical Index and
Neural Embeddings | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large ontologies still pose serious challenges to state-of-the-art ontology
alignment systems. In the paper we present an approach that combines a lexical
index, a neural embedding model and locality modules to effectively divide an
input ontology matching task into smaller and more tractable matching
(sub)tasks. We have conducted a comprehensive evaluation using the datasets of
the Ontology Alignment Evaluation Initiative. The results are encouraging and
suggest that the proposed methods are adequate in practice and can be
integrated within the workflow of state-of-the-art systems.
| [
{
"version": "v1",
"created": "Thu, 31 May 2018 09:57:01 GMT"
}
]
| 1,527,811,200,000 | [
[
"Jimenez-Ruiz",
"Ernesto",
""
],
[
"Agibetov",
"Asan",
""
],
[
"Samwald",
"Matthias",
""
],
[
"Cross",
"Valerie",
""
]
]
|
1805.12495 | Reza Shahbazi | Reza Shahbazi | Invariant Representation of Mathematical Expressions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While there exist many methods in machine learning for comparison of letter
string data, most are better equipped to handle strings that represent natural
language, and their performance will not hold up when presented with strings
that correspond to mathematical expressions. Based on the graphical
representation of the expression tree, here we propose a simple method for
encoding such expressions that is only sensitive to their structural
properties, and invariant to the specifics which can vary between two seemingly
different, but semantically similar mathematical expressions.
| [
{
"version": "v1",
"created": "Tue, 29 May 2018 22:48:59 GMT"
},
{
"version": "v2",
"created": "Thu, 11 Feb 2021 20:44:04 GMT"
}
]
| 1,613,347,200,000 | [
[
"Shahbazi",
"Reza",
""
]
]
|
1805.12565 | Tal Friedman | Tal Friedman, Guy Van den Broeck | Approximate Knowledge Compilation by Online Collapsed Importance
Sampling | paper + supplementary material | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce collapsed compilation, a novel approximate inference algorithm
for discrete probabilistic graphical models. It is a collapsed sampling
algorithm that incrementally selects which variable to sample next based on the
partial sample obtained so far. This online collapsing, together with knowledge
compilation inference on the remaining variables, naturally exploits local
structure and context-specific independence in the distribution. These
properties are naturally exploited in exact inference, but are difficult to
harness for approximate inference. Moreover, by having a partially compiled
circuit available during sampling, collapsed compilation has access to a highly
effective proposal distribution for importance sampling. Our experimental
evaluation shows that collapsed compilation performs well on standard
benchmarks. In particular, when the amount of exact inference is equally
limited, collapsed compilation is competitive with the state of the art, and
outperforms it on several benchmarks.
| [
{
"version": "v1",
"created": "Thu, 31 May 2018 17:13:13 GMT"
}
]
| 1,527,811,200,000 | [
[
"Friedman",
"Tal",
""
],
[
"Broeck",
"Guy Van den",
""
]
]
|
1806.00119 | Christoph Redl | Christoph Redl | Technical Report: Inconsistency in Answer Set Programs and Extensions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Answer Set Programming (ASP) is a well-known problem solving approach based
on nonmonotonic logic programs. HEX-programs extend ASP with external atoms for
accessing arbitrary external information, which can introduce values that do
not appear in the input program. In this work we consider inconsistent ASP- and
HEX-programs, i.e., programs without answer sets. We study characterizations of
inconsistency, introduce a novel notion for explaining inconsistencies in terms
of input facts, analyze the complexity of reasoning tasks in context of
inconsistency analysis, and present techniques for computing inconsistency
reasons. This theoretical work is motivated by two concrete applications, which
we also present. The first one is the new modeling technique of query answering
over subprograms as a convenient alternative to the well-known saturation
technique. The second application is a new evaluation algorithm for
HEX-programs based on conflict-driven learning for programs with multiple
components: while for certain program classes previous techniques suffer an
evaluation bottleneck, the new approach shows significant, potentially
exponential speedup in our experiments. Since well-known ASP extensions such as
constraint ASP and DL-programs correspond to special cases of HEX, all
presented results are interesting beyond the specific formalism.
| [
{
"version": "v1",
"created": "Thu, 31 May 2018 22:11:21 GMT"
}
]
| 1,528,070,400,000 | [
[
"Redl",
"Christoph",
""
]
]
|
1806.00175 | Ramtin Keramati | Ramtin Keramati, Jay Whang, Patrick Cho, Emma Brunskill | Fast Exploration with Simplified Models and Approximately Optimistic
Planning in Model Based Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans learn to play video games significantly faster than the
state-of-the-art reinforcement learning (RL) algorithms. People seem to build
simple models that are easy to learn to support planning and strategic
exploration. Inspired by this, we investigate two issues in leveraging
model-based RL for sample efficiency. First we investigate how to perform
strategic exploration when exact planning is not feasible and empirically show
that optimistic Monte Carlo Tree Search outperforms posterior sampling methods.
Second we show how to learn simple deterministic models to support fast
learning using object representation. We illustrate the benefit of these ideas
by introducing a novel algorithm, Strategic Object Oriented Reinforcement
Learning (SOORL), that outperforms state-of-the-art algorithms in the game of
Pitfall! in less than 50 episodes.
| [
{
"version": "v1",
"created": "Fri, 1 Jun 2018 02:54:06 GMT"
},
{
"version": "v2",
"created": "Mon, 26 Nov 2018 04:56:07 GMT"
}
]
| 1,543,276,800,000 | [
[
"Keramati",
"Ramtin",
""
],
[
"Whang",
"Jay",
""
],
[
"Cho",
"Patrick",
""
],
[
"Brunskill",
"Emma",
""
]
]
|
1806.00340 | Luke Oakden-Rayner | William Gale, Luke Oakden-Rayner, Gustavo Carneiro, Andrew P Bradley,
Lyle J Palmer | Producing radiologist-quality reports for interpretable artificial
intelligence | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current approaches to explaining the decisions of deep learning systems for
medical tasks have focused on visualising the elements that have contributed to
each decision. We argue that such approaches are not enough to "open the black
box" of medical decision making systems because they are missing a key
component that has been used as a standard communication tool between doctors
for centuries: language. We propose a model-agnostic interpretability method
that involves training a simple recurrent neural network model to produce
descriptive sentences to clarify the decision of deep learning classifiers.
We test our method on the task of detecting hip fractures from frontal pelvic
x-rays. This process requires minimal additional labelling despite producing
text containing elements that the original deep learning classification model
was not specifically trained to detect.
The experimental results show that: 1) the sentences produced by our method
consistently contain the desired information, 2) the generated sentences are
preferred by doctors compared to current tools that create saliency maps, and
3) the combination of visualisations and generated text is better than either
alone.
| [
{
"version": "v1",
"created": "Fri, 1 Jun 2018 13:47:12 GMT"
}
]
| 1,528,070,400,000 | [
[
"Gale",
"William",
""
],
[
"Oakden-Rayner",
"Luke",
""
],
[
"Carneiro",
"Gustavo",
""
],
[
"Bradley",
"Andrew P",
""
],
[
"Palmer",
"Lyle J",
""
]
]
|
1806.00352 | Mieczys{\l}aw K{\l}opotek | Mieczys{\l}aw A. K{\l}opotek | Too Fast Causal Inference under Causal Insufficiency | 40 pages. arXiv admin note: text overlap with arXiv:1705.10308 | null | null | ICS-PAS Reports 761/94 | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Causally insufficient structures (models with latent or hidden variables, or
with confounding etc.) of joint probability distributions have been subject of
intense study not only in statistics, but also in various AI systems. In AI,
belief networks, being representations of joint probability distribution with
an underlying directed acyclic graph structure, are paid special attention due
to the fact that efficient reasoning (uncertainty propagation) methods have
been developed for belief network structures. Algorithms have been therefore
developed to acquire the belief network structure from data. As artifacts due
to variable hiding negatively influence the performance of derived belief
networks, models with latent variables have been studied and several algorithms
for learning belief network structure under causal insufficiency have also been
developed.
  Regrettably, some of them are already known to be erroneous (e.g. the IC
algorithm of [Pearl:Verma:91]). This paper is devoted to another algorithm, the
Fast Causal Inference (FCI) Algorithm of [Spirtes:93]. It is proven by a
specially constructed example that this algorithm, as it stands in
[Spirtes:93], is also erroneous. Fundamental reason for failure of this
algorithm is the temporary introduction of non-real links between nodes of the
network with the intention of later removal. While for trivial dependency
structures these non-real links may be actually removed, this may not be the
case for complex ones, e.g. for the case described in this paper. A remedy of
this failure is proposed.
| [
{
"version": "v1",
"created": "Wed, 30 May 2018 19:32:39 GMT"
}
]
| 1,528,070,400,000 | [
[
"Kłopotek",
"Mieczysław A.",
""
]
]
|
1806.00553 | Christopher Stanton | Christopher Stanton and Jeff Clune | Deep Curiosity Search: Intra-Life Exploration Can Improve Performance on
Challenging Deep Reinforcement Learning Problems | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional exploration methods in RL require agents to perform random
actions to find rewards. But these approaches struggle on sparse-reward domains
like Montezuma's Revenge where the probability that any random action sequence
leads to reward is extremely low. Recent algorithms have performed well on such
tasks by encouraging agents to visit new states or perform new actions in
relation to all prior training episodes (which we call across-training
novelty). But such algorithms do not consider whether an agent exhibits
intra-life novelty: doing something new within the current episode, regardless
of whether those behaviors have been performed in previous episodes. We
hypothesize that across-training novelty might discourage agents from
revisiting initially non-rewarding states that could become important stepping
stones later in training. We introduce Deep Curiosity Search (DeepCS), which
encourages intra-life exploration by rewarding agents for visiting as many
different states as possible within each episode, and show that DeepCS matches
the performance of current state-of-the-art methods on Montezuma's Revenge. We
further show that DeepCS improves exploration on Amidar, Freeway, Gravitar, and
Tutankham (many of which are hard exploration games). Surprisingly, DeepCS
doubles A2C performance on Seaquest, a game we would not have expected to
benefit from intra-life exploration because the arena is small and already
easily navigated by naive exploration techniques. In one run, DeepCS achieves a
maximum training score of 80,000 points on Seaquest, higher than any methods
other than Ape-X. The strong performance of DeepCS on these sparse- and
dense-reward tasks suggests that encouraging intra-life novelty is an
interesting, new approach for improving performance in Deep RL and motivates
further research into hybridizing across-training and intra-life exploration
methods.
| [
{
"version": "v1",
"created": "Fri, 1 Jun 2018 22:09:51 GMT"
},
{
"version": "v2",
"created": "Mon, 22 Oct 2018 17:23:01 GMT"
},
{
"version": "v3",
"created": "Sat, 24 Nov 2018 00:29:31 GMT"
}
]
| 1,543,276,800,000 | [
[
"Stanton",
"Christopher",
""
],
[
"Clune",
"Jeff",
""
]
]
|
1806.00610 | Fernando Mart\'inez Plumed | Fernando Mart\'inez-Plumed, Shahar Avin, Miles Brundage, Allan Dafoe,
Sean \'O h\'Eigeartaigh, Jos\'e Hern\'andez-Orallo | Between Progress and Potential Impact of AI: the Neglected Dimensions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We reframe the analysis of progress in AI by incorporating into an overall
framework both the task performance of a system, and the time and resource
costs incurred in the development and deployment of the system. These costs
include: data, expert knowledge, human oversight, software resources, computing
cycles, hardware and network facilities, and (what kind of) time. These costs
are distributed over the life cycle of the system, and may place differing
demands on different developers and users. The multidimensional performance and
cost space we present can be collapsed to a single utility metric that measures
the value of the system for different stakeholders. Even without a single
utility function, AI advances can be generically assessed by whether they
expand the Pareto surface. We label these types of costs as neglected
dimensions of AI progress, and explore them using four case studies: Alpha*
(Go, Chess, and other board games), ALE (Atari games), ImageNet (Image
classification) and Virtual Personal Assistants (Siri, Alexa, Cortana, and
Google Assistant). This broader model of progress in AI will lead to novel ways
of estimating the potential societal use and impact of an AI system, and the
establishment of milestones for future progress.
| [
{
"version": "v1",
"created": "Sat, 2 Jun 2018 09:21:12 GMT"
},
{
"version": "v2",
"created": "Sat, 2 Jul 2022 09:54:55 GMT"
}
]
| 1,656,979,200,000 | [
[
"Martínez-Plumed",
"Fernando",
""
],
[
"Avin",
"Shahar",
""
],
[
"Brundage",
"Miles",
""
],
[
"Dafoe",
"Allan",
""
],
[
"hÉigeartaigh",
"Sean Ó",
""
],
[
"Hernández-Orallo",
"José",
""
]
]
|
1806.00683 | Vijaya Sai Krishna Gottipati | Sai Krishna G.V., Kyle Goyette, Ahmad Chamseddine, Breandan Considine | Deep Pepper: Expert Iteration based Chess agent in the Reinforcement
Learning Setting | Tabula Rasa, Chess engine, Learning Fast and Slow, Reinforcement
Learning, Alpha Zero | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An almost-perfect chess playing agent has been a long standing challenge in
the field of Artificial Intelligence. Some of the recent advances demonstrate
we are approaching that goal. In this project, we provide methods for faster
training of self-play style algorithms, mathematical details of the algorithm
used, various potential future directions, and discuss most of the relevant
work in the area of computer chess. Deep Pepper uses embedded knowledge to
accelerate the training of the chess engine over a "tabula rasa" system such as
Alpha Zero. We also release our code to promote further research.
| [
{
"version": "v1",
"created": "Sat, 2 Jun 2018 18:35:37 GMT"
},
{
"version": "v2",
"created": "Wed, 17 Oct 2018 21:01:36 GMT"
}
]
| 1,539,907,200,000 | [
[
"V.",
"Sai Krishna G.",
""
],
[
"Goyette",
"Kyle",
""
],
[
"Chamseddine",
"Ahmad",
""
],
[
"Considine",
"Breandan",
""
]
]
|
1806.00882 | Mohammad-Ali Javidian | Mohammad Ali Javidian and Marco Valtorta | Structural Learning of Multivariate Regression Chain Graphs via
Decomposition | 19 pages, 6 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We extend the decomposition approach for learning Bayesian networks (BNs)
proposed by Xie et al. to learning multivariate regression chain graphs (MVR
CGs), which include BNs as a special case. The same advantages of this
decomposition approach hold in the more general setting: reduced complexity and
increased power of computational independence tests. Moreover, latent (hidden)
variables can be represented in MVR CGs by using bidirected edges, and our
algorithm correctly recovers any independence structure that is faithful to an
MVR CG, thus greatly extending the range of applications of decomposition-based
model selection techniques. Simulations under a variety of settings demonstrate
the competitive performance of our method in comparison with the PC-like
algorithm (Sonntag and Pena). In fact, the decomposition-based algorithm
usually outperforms the PC-like algorithm except in running time. The
performance of both algorithms is much better when the underlying graph is
sparse.
| [
{
"version": "v1",
"created": "Sun, 3 Jun 2018 21:26:36 GMT"
},
{
"version": "v2",
"created": "Mon, 24 Feb 2020 19:08:21 GMT"
}
]
| 1,582,675,200,000 | [
[
"Javidian",
"Mohammad Ali",
""
],
[
"Valtorta",
"Marco",
""
]
]
|
1806.01130 | Ludmila Kuncheva | Julian Zubek and Ludmila Kuncheva | Learning from Exemplars and Prototypes in Machine Learning and
Psychology | 17 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper draws a parallel between similarity-based categorisation models
developed in cognitive psychology and the nearest neighbour classifier (1-NN)
in machine learning. Conceived as a result of the historical rivalry between
prototype theories (abstraction) and exemplar theories (memorisation), recent
models of human categorisation seek a compromise in-between. Regarding the
stimuli (entities to be categorised) as points in a metric space, machine
learning offers a large collection of methods to select a small, representative
and discriminative point set. These methods are known under various names:
instance selection, data editing, prototype selection, prototype generation or
prototype replacement. The nearest neighbour classifier is used with the
selected reference set. Such a set can be interpreted as a data-driven
categorisation model. We juxtapose the models from the two fields to enable
cross-referencing. We believe that both machine learning and cognitive
psychology can draw inspiration from the comparison and enrich their repertoire
of similarity-based models.
| [
{
"version": "v1",
"created": "Mon, 4 Jun 2018 14:05:07 GMT"
}
]
| 1,528,156,800,000 | [
[
"Zubek",
"Julian",
""
],
[
"Kuncheva",
"Ludmila",
""
]
]
|
1806.01151 | Ivan Bravi | Ivan Bravi, Jialin Liu, Diego Perez-Liebana, Simon Lucas | Shallow decision-making analysis in General Video Game Playing | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The General Video Game AI competitions have been the testing ground for
several techniques for game playing, such as evolutionary computation
techniques, tree search algorithms, hyper heuristic based or knowledge based
algorithms. So far the metrics used to evaluate the performance of agents have
been win ratio, game score and length of games. In this paper we provide a
wider set of metrics and a comparison method for evaluating and comparing
agents. The metrics and the comparison method give shallow introspection into
the agent's decision making process and they can be applied to any agent
regardless of its algorithmic nature. In this work, the metrics and the
comparison method are used to measure the impact of the terms that compose a
tree policy of an MCTS based agent, comparing with several baseline agents. The
results clearly show how promising such general approach is and how it can be
useful to understand the behaviour of an AI agent, in particular, how the
comparison with baseline agents can help understanding the shape of the agent
decision landscape. The presented metrics and comparison method represent a
step toward to more descriptive ways of logging and analysing agent's
behaviours.
| [
{
"version": "v1",
"created": "Mon, 4 Jun 2018 14:47:46 GMT"
}
]
| 1,528,156,800,000 | [
[
"Bravi",
"Ivan",
""
],
[
"Liu",
"Jialin",
""
],
[
"Perez-Liebana",
"Diego",
""
],
[
"Lucas",
"Simon",
""
]
]
|
1806.01387 | Christian Guckelsberger | Christian Guckelsberger, Christoph Salge, Julian Togelius | New And Surprising Ways to Be Mean. Adversarial NPCs with Coupled
Empowerment Minimisation | IEEE Computational Intelligence and Games (CIG) conference, 2018,
Maastricht. 8 pages, 6 figures, 2 tables | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Creating Non-Player Characters (NPCs) that can react robustly to unforeseen
player behaviour or novel game content is difficult and time-consuming. This
hinders the design of believable characters, and the inclusion of NPCs in games
that rely heavily on procedural content generation. We have previously
addressed this challenge by means of empowerment, a model of intrinsic
motivation, and demonstrated how a coupled empowerment maximisation (CEM)
policy can yield generic, companion-like behaviour. In this paper, we extend
the CEM framework with a minimisation policy to give rise to adversarial
behaviour. We conduct a qualitative, exploratory study in a dungeon-crawler
game, demonstrating that CEM can exploit the affordances of different content
facets in adaptive adversarial behaviour without modifications to the policy.
Changes to the level design, underlying mechanics and our character's actions
do not threaten our NPC's robustness, but yield new and surprising ways to be
mean.
| [
{
"version": "v1",
"created": "Mon, 4 Jun 2018 21:02:49 GMT"
}
]
| 1,528,243,200,000 | [
[
"Guckelsberger",
"Christian",
""
],
[
"Salge",
"Christoph",
""
],
[
"Togelius",
"Julian",
""
]
]
|
1806.01709 | Leonidas Doumas | Leonidas A. A. Doumas, Guillermo Puebla, Andrea E. Martin | Human-like generalization in a machine through predicate learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans readily generalize, applying prior knowledge to novel situations and
stimuli. Advances in machine learning and artificial intelligence have begun to
approximate and even surpass human performance, but machine systems reliably
struggle to generalize information to untrained situations. We describe a
neural network model that is trained to play one video game (Breakout) and
demonstrates one-shot generalization to a new game (Pong). The model
generalizes by learning representations that are functionally and formally
symbolic from training data, without feedback, and without requiring that
structured representations be specified a priori. The model uses unsupervised
comparison to discover which characteristics of the input are invariant, and to
learn relational predicates; it then applies these predicates to arguments in a
symbolic fashion, using oscillatory regularities in network firing to
dynamically bind predicates to arguments. We argue that models of human
cognition must account for far-reaching and flexible generalization, and that
in order to do so, models must be able to discover symbolic representations
from unstructured data, a process we call predicate learning. Only then can
models begin to adequately explain where human-like representations come from,
why human cognition is the way it is, and why it continues to differ from
machine intelligence in crucial ways.
| [
{
"version": "v1",
"created": "Tue, 5 Jun 2018 14:21:20 GMT"
},
{
"version": "v2",
"created": "Wed, 6 Jun 2018 20:05:11 GMT"
},
{
"version": "v3",
"created": "Thu, 7 Mar 2019 10:41:38 GMT"
}
]
| 1,552,003,200,000 | [
[
"Doumas",
"Leonidas A. A.",
""
],
[
"Puebla",
"Guillermo",
""
],
[
"Martin",
"Andrea E.",
""
]
]
|
1806.01756 | Daniel T Chang | Daniel T Chang | Concept-Oriented Deep Learning | 11 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concepts are the foundation of human deep learning, understanding, and
knowledge integration and transfer. We propose concept-oriented deep learning
(CODL) which extends (machine) deep learning with concept representations and
conceptual understanding capability. CODL addresses some of the major
limitations of deep learning: interpretability, transferability, contextual
adaptation, and requirement for lots of labeled training data. We discuss the
major aspects of CODL including concept graph, concept representations, concept
exemplars, and concept representation learning systems supporting incremental
and continual learning.
| [
{
"version": "v1",
"created": "Tue, 5 Jun 2018 15:50:30 GMT"
}
]
| 1,528,243,200,000 | [
[
"Chang",
"Daniel T",
""
]
]
|
1806.01825 | G. Zacharias Holland | G. Zacharias Holland, Erin J. Talvitie, and Michael Bowling | The Effect of Planning Shape on Dyna-style Planning in High-dimensional
State Spaces | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dyna is a fundamental approach to model-based reinforcement learning (MBRL)
that interleaves planning, acting, and learning in an online setting. In the
most typical application of Dyna, the dynamics model is used to generate
one-step transitions from selected start states from the agent's history, which
are used to update the agent's value function or policy as if they were real
experiences. In this work, one-step Dyna was applied to several games from the
Arcade Learning Environment (ALE). We found that the model-based updates
offered surprisingly little benefit over simply performing more updates with
the agent's existing experience, even when using a perfect model. We
hypothesize that to get the most from planning, the model must be used to
generate unfamiliar experience. To test this, we experimented with the "shape"
of planning in multiple different concrete instantiations of Dyna, performing
fewer, longer rollouts, rather than many short rollouts. We found that planning
shape has a profound impact on the efficacy of Dyna for both perfect and
learned models. In addition to these findings regarding Dyna in general, our
results represent, to our knowledge, the first time that a learned dynamics
model has been successfully used for planning in the ALE, suggesting that Dyna
may be a viable approach to MBRL in the ALE and other high-dimensional
problems.
| [
{
"version": "v1",
"created": "Tue, 5 Jun 2018 17:31:02 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jun 2018 20:46:56 GMT"
},
{
"version": "v3",
"created": "Fri, 29 Mar 2019 03:00:57 GMT"
}
]
| 1,554,076,800,000 | [
[
"Holland",
"G. Zacharias",
""
],
[
"Talvitie",
"Erin J.",
""
],
[
"Bowling",
"Michael",
""
]
]
|
1806.02091 | Andreas Hein M. | Andreas Makoto Hein, H\'el\`ene Condat | Can Machines Design? An Artificial General Intelligence Approach | null | null | 10.13140/RG.2.2.24564.45448 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can machines design? Can they come up with creative solutions to problems and
build tools and artifacts across a wide range of domains? Recent advances in
the field of computational creativity and formal Artificial General
Intelligence (AGI) provide frameworks for machines with the general ability to
design. In this paper we propose to integrate a formal computational creativity
framework into the G\"odel machine framework. We call the resulting framework
design G\"odel machine. Such a machine could solve a variety of design problems
by generating novel concepts. In addition, it could change the way these
concepts are generated by modifying itself. The design G\"odel machine is able
to improve its initial design program, once it has proven that a modification
would increase its return on the utility function. Finally, we sketch out a
specific version of the design G\"odel machine which specifically addresses the
design of complex software and hardware systems. Future work aims at the
development of a more formal version of the design G\"odel machine and a proof
of concept implementation.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2018 09:41:58 GMT"
},
{
"version": "v2",
"created": "Thu, 7 Jun 2018 15:24:42 GMT"
},
{
"version": "v3",
"created": "Fri, 22 Jun 2018 22:42:46 GMT"
},
{
"version": "v4",
"created": "Tue, 26 Jun 2018 08:23:40 GMT"
}
]
| 1,530,057,600,000 | [
[
"Hein",
"Andreas Makoto",
""
],
[
"Condat",
"Hélène",
""
]
]
|
1806.02127 | Lavindra de Silva | Lavindra de Silva | Addendum to "HTN Acting: A Formalism and an Algorithm" | This paper is a more detailed version of the following publication:
Lavindra de Silva, "HTN Acting: A Formalism and an Algorithm", in Proceedings
of AAMAS 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hierarchical Task Network (HTN) planning is a practical and efficient
approach to planning when the 'standard operating procedures' for a domain are
available. Like Belief-Desire-Intention (BDI) agent reasoning, HTN planning
performs hierarchical and context-based refinement of goals into subgoals and
basic actions. However, while HTN planners 'lookahead' over the consequences of
choosing one refinement over another, BDI agents interleave refinement with
acting. There has been renewed interest in making HTN planners behave more like
BDI agent systems, e.g. to have a unified representation for acting and
planning. However, past work on the subject has remained informal or
implementation-focused. This paper is a formal account of 'HTN acting', which
supports interleaved deliberation, acting, and failure recovery. We use the
syntax of the most general HTN planning formalism and build on its core
semantics, and we provide an algorithm which combines our new formalism with
the processing of exogenous events. We also study the properties of HTN acting
and its relation to HTN planning.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2018 11:33:26 GMT"
},
{
"version": "v2",
"created": "Sun, 4 Jul 2021 17:49:20 GMT"
}
]
| 1,625,529,600,000 | [
[
"de Silva",
"Lavindra",
""
]
]
|
1806.02180 | Chun Kit Yeung | Chun-Kit Yeung and Dit-Yan Yeung | Addressing Two Problems in Deep Knowledge Tracing via
Prediction-Consistent Regularization | 10 pages, L@S 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge tracing is one of the key research areas for empowering
personalized education. It is a task to model students' mastery level of a
knowledge component (KC) based on their historical learning trajectories. In
recent years, a recurrent neural network model called deep knowledge tracing
(DKT) has been proposed to handle the knowledge tracing task and literature has
shown that DKT generally outperforms traditional methods. However, through our
extensive experimentation, we have noticed two major problems in the DKT model.
The first problem is that the model fails to reconstruct the observed input. As
a result, even when a student performs well on a KC, the prediction of that
KC's mastery level decreases instead, and vice versa. Second, the predicted
performance for KCs across time-steps is not consistent. This is undesirable
and unreasonable because a student's performance is expected to transition
over time. To address these problems, we introduce regularization terms that
correspond to reconstruction and waviness to the loss function of the original
DKT model to enhance the consistency in prediction. Experiments show that the
regularized loss function effectively alleviates the two problems without
degrading the original task of DKT.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2018 13:41:48 GMT"
}
]
| 1,528,329,600,000 | [
[
"Yeung",
"Chun-Kit",
""
],
[
"Yeung",
"Dit-Yan",
""
]
]
|
1806.02308 | Hector Geffner | Hector Geffner | Model-free, Model-based, and General Intelligence | null | Invited talk. IJCAI 2018 | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | During the 60s and 70s, AI researchers explored intuitions about intelligence
by writing programs that displayed intelligent behavior. Many good ideas came
out from this work but programs written by hand were not robust or general.
After the 80s, research increasingly shifted to the development of learners
capable of inferring behavior and functions from experience and data, and
solvers capable of tackling well-defined but intractable models like SAT,
classical planning, Bayesian networks, and POMDPs. The learning approach has
achieved considerable success but results in black boxes that do not have the
flexibility, transparency, and generality of their model-based counterparts.
Model-based approaches, on the other hand, require models and scalable
algorithms. Model-free learners and model-based solvers have close parallels
with Systems 1 and 2 in current theories of the human mind: the first, a fast,
opaque, and inflexible intuitive mind; the second, a slow, transparent, and
flexible analytical mind. In this paper, I review developments in AI and draw
on these theories to discuss the gap between model-free learners and
model-based solvers, a gap that needs to be bridged in order to have
intelligent systems that are robust and general.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2018 17:15:27 GMT"
}
]
| 1,528,329,600,000 | [
[
"Geffner",
"Hector",
""
]
]
|
1806.02373 | Mieczys{\l}aw K{\l}opotek | Mieczys{\l}aw A. K{\l}opotek | Dempsterian-Shaferian Belief Network From Data | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shenoy and Shafer {Shenoy:90} demonstrated that both for Dempster-Shafer
Theory and probability theory there exists a possibility to calculate
efficiently marginals of joint belief distributions (by so-called local
computations) provided that the joint distribution can be decomposed
(factorized) into a belief network. A number of algorithms exists for
decomposition of probabilistic joint belief distribution into a bayesian
(belief) network from data. For example
  Spirtes, Glymour and Scheines {Spirtes:90b} formulated a Conjecture that a
direct dependence test and a head-to-head meeting test would suffice to
construe bayesian network from data in such a way that Pearl's concept of
d-separation {Geiger:90} applies.
This paper is intended to transfer Spirtes, Glymour and Scheines
{Spirtes:90b} approach onto the ground of the Dempster-Shafer Theory (DST). For
this purpose, a frequentionistic interpretation of the DST developed in
{Klopotek:93b} is exploited. A special notion of conditionality for DST is
introduced and demonstrated to behave with respect to Pearl's d-separation
{Geiger:90} much the same way as conditional probability (though some
differences like non-uniqueness are evident). Based on this, an algorithm
analogous to that from {Spirtes:90b} is developed.
The notion of a partially oriented graph (pog) is introduced and within this
graph the notion of p-d-separation is defined. If direct dependence test and
head-to-head meeting test are used to orient the pog then its p-d-separation is
shown to be equivalent to the Pearl's d-separation for any compatible dag.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2018 18:27:39 GMT"
}
]
| 1,528,416,000,000 | [
[
"Kłopotek",
"Mieczysław A.",
""
]
]
|
1806.02415 | Cheol Young Park | Cheol Young Park, Kathryn Blackmond Laskey, Paulo C. G. Costa, Shou
Matsumoto | Gaussian Mixture Reduction for Time-Constrained Approximate Inference in
Hybrid Bayesian Networks | null | Appl. Sci. 2019, 9, 2055 | 10.3390/app9102055 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hybrid Bayesian Networks (HBNs), which contain both discrete and continuous
variables, arise naturally in many application areas (e.g., image
understanding, data fusion, medical diagnosis, fraud detection). This paper
concerns inference in an important subclass of HBNs, the conditional Gaussian
(CG) networks, in which all continuous random variables have Gaussian
distributions and all children of continuous random variables must be
continuous. Inference in CG networks can be NP-hard even for special-case
structures, such as poly-trees, where inference in discrete Bayesian networks
can be performed in polynomial time. Therefore, approximate inference is
required. In approximate inference, it is often necessary to trade off accuracy
against solution time. This paper presents an extension to the Hybrid Message
Passing inference algorithm for general CG networks and an algorithm for
optimizing its accuracy given a bound on computation time. The extended
algorithm uses Gaussian mixture reduction to prevent an exponential increase in
the number of Gaussian mixture components. The trade-off algorithm performs
pre-processing to find optimal run-time settings for the extended algorithm.
Experimental results for four CG networks compare performance of the extended
algorithm with existing algorithms and show the optimal settings for these CG
networks.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2018 20:38:27 GMT"
}
]
| 1,558,396,800,000 | [
[
"Park",
"Cheol Young",
""
],
[
"Laskey",
"Kathryn Blackmond",
""
],
[
"Costa",
"Paulo C. G.",
""
],
[
"Matsumoto",
"Shou",
""
]
]
|
1806.02457 | Cheol Young Park | Cheol Young Park and Kathryn Blackmond Laskey | Reference Model of Multi-Entity Bayesian Networks for Predictive
Situation Awareness | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the past quarter-century, situation awareness (SAW) has become a
critical research theme, because of its importance. Since the concept of SAW
was first introduced during World War I, various versions of SAW have been
researched and introduced. Predictive Situation Awareness (PSAW) focuses on the
ability to predict aspects of a temporally evolving situation over time. PSAW
requires a formal representation and a reasoning method using such a
representation. A Multi-Entity Bayesian Network (MEBN) is a knowledge
representation formalism combining Bayesian Networks (BN) with First-Order
Logic (FOL). MEBN can be used to represent uncertain situations (supported by
BN) as well as complex situations (supported by FOL). Also, efficient reasoning
algorithms for MEBN have been developed. MEBN can be a formal representation to
support PSAW and has been used for several PSAW systems. Although several MEBN
applications for PSAW exist, very little work can be found in the literature
that attempts to generalize a MEBN model to support PSAW. In this research, we
define a reference model for MEBN in PSAW, called a PSAW-MEBN reference model.
The PSAW-MEBN reference model enables us to easily develop a MEBN model for
PSAW by supporting the design of a MEBN model for PSAW. In this research, we
introduce two example use cases using the PSAW-MEBN reference model to develop
MEBN models to support PSAW: a Smart Manufacturing System and a Maritime Domain
Awareness System.
| [
{
"version": "v1",
"created": "Wed, 6 Jun 2018 23:17:12 GMT"
},
{
"version": "v2",
"created": "Fri, 8 Jun 2018 00:37:20 GMT"
}
]
| 1,528,675,200,000 | [
[
"Park",
"Cheol Young",
""
],
[
"Laskey",
"Kathryn Blackmond",
""
]
]
|
1806.03267 | Aboul Ella Hassanien Abo | Mohamed Yorky and Aboul Ella Hassanien | Orbital Petri Nets: A Novel Petri Net Approach | 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Petri Nets is very interesting tool for studying and simulating different
behaviors of information systems. It can be used in different applications
based on the appropriate class of Petri Nets whereas it is classical, colored
or timed Petri Nets. In this paper we introduce a new approach of Petri Nets
called orbital Petri Nets (OPN) for studying the orbital rotating systems
within a specific domain. The study investigated and analyzed OPN with
highlighting the problem of space debris collision problem as a case study. The
mathematical investigation results of two OPN models proved that space debris
collision problem can be prevented based on the new method of firing sequence
in OPN. By this study, new smart algorithms can be implemented and simulated by
orbital Petri Nets for mitigating the space debris collision problem as a next
work.
| [
{
"version": "v1",
"created": "Fri, 8 Jun 2018 16:31:46 GMT"
}
]
| 1,528,675,200,000 | [
[
"Yorky",
"Mohamed",
""
],
[
"Hassanien",
"Aboul Ella",
""
]
]
|
1806.03455 | Bradley Alexander | Brad Alexander | A Preliminary Exploration of Floating Point Grammatical Evolution | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current GP frameworks are highly effective on a range of real and simulated
benchmarks. However, due to the high dimensionality of the genotypes for GP,
the task of visualising the fitness landscape for GP search can be difficult.
This paper describes a new framework: Floating Point Grammatical Evolution
(FP-GE) which uses a single floating point genotype to encode an individual
program. This encoding permits easier visualisation of the fitness landscape
arbitrary problems by providing a way to map fitness against a single
dimension. The new framework also makes it trivially easy to apply continuous
search algorithms, such as Differential Evolution, to the search problem. In
this work, the FP-GE framework is tested against several regression problems,
visualising the search landscape for these and comparing different search
meta-heuristics.
| [
{
"version": "v1",
"created": "Sat, 9 Jun 2018 10:51:39 GMT"
}
]
| 1,528,761,600,000 | [
[
"Alexander",
"Brad",
""
]
]
|
1806.03793 | Siyuan Li | Siyuan Li, Fangda Gu, Guangxiang Zhu, Chongjie Zhang | Context-Aware Policy Reuse | Camera-ready version for AAMAS 2019 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transfer learning can greatly speed up reinforcement learning for a new task
by leveraging policies of relevant tasks.
Existing works of policy reuse either focus on only selecting a single best
source policy for transfer without considering contexts, or cannot guarantee to
learn an optimal policy for a target task.
To improve transfer efficiency and guarantee optimality, we develop a novel
policy reuse method, called Context-Aware Policy reuSe (CAPS), that enables
multi-policy transfer. Our method learns when and which source policy is best
for reuse, as well as when to terminate its reuse. CAPS provides theoretical
guarantees in convergence and optimality for both source policy selection and
target task learning. Empirical results on a grid-based navigation domain and
the Pygame Learning Environment demonstrate that CAPS significantly outperforms
other state-of-the-art policy reuse methods.
| [
{
"version": "v1",
"created": "Mon, 11 Jun 2018 03:37:43 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jun 2018 02:53:52 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Jun 2018 11:01:33 GMT"
},
{
"version": "v4",
"created": "Fri, 8 Mar 2019 14:13:36 GMT"
}
]
| 1,552,262,400,000 | [
[
"Li",
"Siyuan",
""
],
[
"Gu",
"Fangda",
""
],
[
"Zhu",
"Guangxiang",
""
],
[
"Zhang",
"Chongjie",
""
]
]
|
1806.03820 | Malayandi Palaniappan | Dhruv Malik, Malayandi Palaniappan, Jaime F. Fisac, Dylan
Hadfield-Menell, Stuart Russell, Anca D. Dragan | An Efficient, Generalized Bellman Update For Cooperative Inverse
Reinforcement Learning | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our goal is for AI systems to correctly identify and act according to their
human user's objectives. Cooperative Inverse Reinforcement Learning (CIRL)
formalizes this value alignment problem as a two-player game between a human
and robot, in which only the human knows the parameters of the reward function:
the robot needs to learn them as the interaction unfolds. Previous work showed
that CIRL can be solved as a POMDP, but with an action space size exponential
in the size of the reward parameter space. In this work, we exploit a specific
property of CIRL---the human is a full information agent---to derive an
optimality-preserving modification to the standard Bellman update; this reduces
the complexity of the problem by an exponential factor and allows us to relax
CIRL's assumption of human rationality. We apply this update to a variety of
POMDP solvers and find that it enables us to scale CIRL to non-trivial
problems, with larger reward parameter spaces, and larger action spaces for
both robot and human. In solutions to these larger problems, the human exhibits
pedagogic (teaching) behavior, while the robot interprets it as such and
attains higher value for the human.
| [
{
"version": "v1",
"created": "Mon, 11 Jun 2018 06:06:43 GMT"
}
]
| 1,528,761,600,000 | [
[
"Malik",
"Dhruv",
""
],
[
"Palaniappan",
"Malayandi",
""
],
[
"Fisac",
"Jaime F.",
""
],
[
"Hadfield-Menell",
"Dylan",
""
],
[
"Russell",
"Stuart",
""
],
[
"Dragan",
"Anca D.",
""
]
]
|
1806.04325 | Jasper C.H. Lee | Jasper C.H. Lee, Jimmy H.M. Lee, Allen Z. Zhong | Augmenting Stream Constraint Programming with Eventuality Conditions | Added proofs and an appendix containing a constraint model that was
not included in the previous version | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stream constraint programming is a recent addition to the family of
constraint programming frameworks, where variable domains are sets of infinite
streams over finite alphabets. Previous works showed promising results for its
applicability to real-world planning and control problems. In this paper,
motivated by the modelling of planning applications, we improve the
expressiveness of the framework by introducing 1) the "until" constraint, a new
construct that is adapted from Linear Temporal Logic and 2) the @ operator on
streams, a syntactic sugar for which we provide a more efficient solving
algorithm over simple desugaring. For both constructs, we propose corresponding
novel solving algorithms and prove their correctness. We present competitive
experimental results on the Missionaries and Cannibals logic puzzle and a
standard path planning application on the grid, by comparing with Apt and
Brand's method for verifying eventuality conditions using a CP approach.
| [
{
"version": "v1",
"created": "Tue, 12 Jun 2018 04:20:02 GMT"
},
{
"version": "v2",
"created": "Mon, 6 Aug 2018 20:17:43 GMT"
}
]
| 1,533,686,400,000 | [
[
"Lee",
"Jasper C. H.",
""
],
[
"Lee",
"Jimmy H. M.",
""
],
[
"Zhong",
"Allen Z.",
""
]
]
|
1806.04624 | Yangchen Pan | Yangchen Pan, Muhammad Zaheer, Adam White, Andrew Patterson, Martha
White | Organizing Experience: A Deeper Look at Replay Mechanisms for
Sample-based Planning in Continuous State Domains | IJCAI 2018 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Model-based strategies for control are critical to obtain sample efficient
learning. Dyna is a planning paradigm that naturally interleaves learning and
planning, by simulating one-step experience to update the action-value
function. This elegant planning strategy has been mostly explored in the
tabular setting. The aim of this paper is to revisit sample-based planning, in
stochastic and continuous domains with learned models. We first highlight the
flexibility afforded by a model over Experience Replay (ER). Replay-based
methods can be seen as stochastic planning methods that repeatedly sample from
a buffer of recent agent-environment interactions and perform updates to
improve data efficiency. We show that a model, as opposed to a replay buffer,
is particularly useful for specifying which states to sample from during
planning, such as predecessor states that propagate information in reverse from
a state more quickly. We introduce a semi-parametric model learning approach,
called Reweighted Experience Models (REMs), that makes it simple to sample next
states or predecessors. We demonstrate that REM-Dyna exhibits similar
advantages over replay-based methods in learning in continuous state problems,
and that the performance gap grows when moving to stochastic domains, of
increasing size.
| [
{
"version": "v1",
"created": "Tue, 12 Jun 2018 16:07:31 GMT"
}
]
| 1,528,848,000,000 | [
[
"Pan",
"Yangchen",
""
],
[
"Zaheer",
"Muhammad",
""
],
[
"White",
"Adam",
""
],
[
"Patterson",
"Andrew",
""
],
[
"White",
"Martha",
""
]
]
|
1806.04718 | Ahmed Khalifa | Ahmed Khalifa, Scott Lee, Andy Nealen, Julian Togelius | Talakat: Bullet Hell Generation through Constrained Map-Elites | The paper will be published in GECCO 2018 | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | We describe a search-based approach to generating new levels for bullet hell
games, which are action games characterized by and requiring avoidance of a
very large number of projectiles. Levels are represented using a
domain-specific description language, and search in the space defined by this
language is performed by a novel variant of the Map-Elites algorithm which
incorporates a feasible-infeasible approach to constraint satisfaction.
Simulation-based evaluation is used to gauge the fitness of levels, using an
agent based on best-first search. The performance of the agent can be tuned
according to the two dimensions of strategy and dexterity, making it possible
to search for level configurations that require a specific combination of both.
As far as we know, this paper describes the first generator for this game
genre, and includes several algorithmic innovations.
| [
{
"version": "v1",
"created": "Tue, 12 Jun 2018 19:02:19 GMT"
},
{
"version": "v2",
"created": "Thu, 14 Jun 2018 01:38:52 GMT"
}
]
| 1,529,020,800,000 | [
[
"Khalifa",
"Ahmed",
""
],
[
"Lee",
"Scott",
""
],
[
"Nealen",
"Andy",
""
],
[
"Togelius",
"Julian",
""
]
]
|
1806.04915 | Dimiter Dobrev | Dimiter Dobrev | The IQ of Artificial Intelligence | null | Serdica Journal of Computing, Vol. 13, Number 1-2, 2019, pp.41-70 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | All it takes to identify the computer programs which are Artificial
Intelligence is to give them a test and award AI to those that pass the test.
Let us say that the scores they earn at the test will be called IQ. We cannot
pinpoint a minimum IQ threshold that a program has to cover in order to be AI,
however, we will choose a certain value. Thus, our definition for AI will be
any program the IQ of which is above the chosen value. While this idea has
already been implemented in [3], here we will revisit this construct in order
to introduce certain improvements.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2018 09:29:42 GMT"
}
]
| 1,602,460,800,000 | [
[
"Dobrev",
"Dimiter",
""
]
]
|
1806.04959 | Hoda Heidari | Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, and Andreas Krause | Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated
Decision Making | Conference: Thirty-second Conference on Neural Information Processing
Systems (NIPS 2018) | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We draw attention to an important, yet largely overlooked aspect of
evaluating fairness for automated decision making systems---namely risk and
welfare considerations. Our proposed family of measures corresponds to the
long-established formulations of cardinal social welfare in economics, and is
justified by the Rawlsian conception of fairness behind a veil of ignorance.
The convex formulation of our welfare-based measures of fairness allows us to
integrate them as a constraint into any convex loss minimization pipeline. Our
empirical analysis reveals interesting trade-offs between our proposal and (a)
prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of
individual fairness. Furthermore and perhaps most importantly, our work
provides both heuristic justification and empirical evidence suggesting that a
lower bound on our measures often leads to bounded inequality in algorithmic
outcomes; hence presenting the first computationally feasible mechanism for
bounding individual-level inequality.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2018 11:36:05 GMT"
},
{
"version": "v2",
"created": "Mon, 29 Oct 2018 09:39:14 GMT"
},
{
"version": "v3",
"created": "Wed, 28 Nov 2018 13:29:41 GMT"
},
{
"version": "v4",
"created": "Fri, 11 Jan 2019 11:36:36 GMT"
}
]
| 1,547,424,000,000 | [
[
"Heidari",
"Hoda",
""
],
[
"Ferrari",
"Claudio",
""
],
[
"Gummadi",
"Krishna P.",
""
],
[
"Krause",
"Andreas",
""
]
]
|
1806.05106 | Frank Glavin | Frank G. Glavin and Michael G. Madden | DRE-Bot: A Hierarchical First Person Shooter Bot Using Multiple
Sarsa({\lambda}) Reinforcement Learners | 17th International Conference on Computer Games (CGAMES) 2012 | In Computer Games (CGAMES), 2012 17th International Conference on,
pp. 148-152. IEEE, 2012 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper describes an architecture for controlling non-player characters
(NPC) in the First Person Shooter (FPS) game Unreal Tournament 2004.
Specifically, the DRE-Bot architecture is made up of three reinforcement
learners, Danger, Replenish and Explore, which use the tabular Sarsa({\lambda})
algorithm. This algorithm enables the NPC to learn through trial and error,
building up experience over time in an approach inspired by human learning.
Experimentation is carried out to measure the performance of DRE-Bot when
competing against fixed-strategy bots that ship with the game. The discount parameter,
{\gamma}, and the trace parameter, {\lambda}, are also varied to see if their
values have an effect on the performance.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2018 15:19:34 GMT"
}
]
| 1,528,934,400,000 | [
[
"Glavin",
"Frank G.",
""
],
[
"Madden",
"Michael G.",
""
]
]
|
1806.05108 | Theophanes Raptis Mr | Theophanes E. Raptis | Holographic Automata for Ambient Immersive A. I. via Reservoir Computing | 14 p., 2 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove the existence of a semilinear representation of Cellular Automata
(CA) with the introduction of multiple convolution kernels. Examples of the
technique are presented for rules akin to the "edge-of-chaos" including the
Turing universal rule 110 for further utilization in the area of reservoir
computing. We also examine the significance of their dual representation on a
frequency or wavelength domain as a superposition of plane waves for
distributed computing applications including a new proposal for a "Hologrid"
that could be realized with present Wi-Fi and Li-Fi technologies.
| [
{
"version": "v1",
"created": "Sat, 9 Jun 2018 13:01:39 GMT"
},
{
"version": "v2",
"created": "Wed, 20 Jun 2018 13:25:20 GMT"
}
]
| 1,529,539,200,000 | [
[
"Raptis",
"Theophanes E.",
""
]
]
|
1806.05117 | Frank Glavin | Frank G. Glavin and Michael G. Madden | Learning to Shoot in First Person Shooter Games by Stabilizing Actions
and Clustering Rewards for Reinforcement Learning | IEEE Conference on Computational Intelligence and Games (CIG), 2015 | In Conference on Computational Intelligence and Games, pp.
344-351. 2015 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While reinforcement learning (RL) has been applied to turn-based board games
for many years, more complex games involving decision-making in real-time are
beginning to receive more attention. A challenge in such environments is that
the time that elapses between deciding to take an action and receiving a reward
based on its outcome can be longer than the interval between successive
decisions. We explore this in the context of a non-player character (NPC) in a
modern first-person shooter game. Such games take place in 3D environments
where players, both human and computer-controlled, compete by engaging in
combat and completing task objectives. We investigate the use of RL to enable
NPCs to gather experience from game-play and improve their shooting skill over
time from a reward signal based on the damage caused to opponents. We propose a
new method for RL updates and reward calculations, in which the updates are
carried out periodically, after each shooting encounter has ended, and a new
weighted-reward mechanism is used which increases the reward applied to actions
that lead to damaging the opponent in successive hits in what we term "hit
clusters".
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2018 15:41:34 GMT"
}
]
| 1,528,934,400,000 | [
[
"Glavin",
"Frank G.",
""
],
[
"Madden",
"Michael G.",
""
]
]
|
1806.05234 | Daniele Funaro | Daniele Funaro | Understanding the Meaning of Understanding | 9 pages | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can we train a machine to detect if another machine has understood a concept?
In principle, this is possible by conducting tests on the subject of that
concept. However, we want this procedure to be done while avoiding direct
questions. In other words, we would like to isolate the absolute meaning of an
abstract idea by putting it into a class of equivalence, hence without adopting
straight definitions or showing how this idea "works" in practice. We discuss
the metaphysical implications hidden in the above question, with the aim of
providing a plausible reference framework.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2018 19:26:55 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Feb 2019 13:31:59 GMT"
}
]
| 1,549,411,200,000 | [
[
"Funaro",
"Daniele",
""
]
]
|
1806.05292 | Aleksandr Panov | Aleksandr I. Panov, Aleksey Skrynnik | Automatic formation of the structure of abstract machines in
hierarchical reinforcement learning with state clustering | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a new approach to hierarchy formation and task decomposition in
hierarchical reinforcement learning. Our method is based on the Hierarchy Of
Abstract Machines (HAM) framework because the HAM approach is able to design
efficient controllers that will realize specific behaviors in real robots. The
key to our algorithm is the introduction of the internal or "mental"
environment in which the state represents the structure of the HAM hierarchy.
The internal action in this environment leads to changes in the hierarchy of HAMs.
We propose the classical Q-learning procedure in the internal environment, which
allows the agent to obtain an optimal hierarchy. We extend the HAM framework
by adding an on-model approach to select the appropriate sub-machine to execute
action sequences for a certain class of external environment states. Preliminary
experiments demonstrated the prospects of the method.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2018 22:40:49 GMT"
}
]
| 1,529,020,800,000 | [
[
"Panov",
"Aleksandr I.",
""
],
[
"Skrynnik",
"Aleksey",
""
]
]
|
1806.05415 | Alberto Maria Metelli | Alberto Maria Metelli, Mirco Mutti and Marcello Restelli | Configurable Markov Decision Processes | null | Proceedings of the 35 th International Conference on Machine
Learning, Stockholm, Sweden, PMLR 80, 2018 | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many real-world problems, there is the possibility to configure, to a
limited extent, some environmental parameters to improve the performance of a
learning agent. In this paper, we propose a novel framework, Configurable
Markov Decision Processes (Conf-MDPs), to model this new type of interaction
with the environment. Furthermore, we provide a new learning algorithm, Safe
Policy-Model Iteration (SPMI), to jointly and adaptively optimize the policy
and the environment configuration. After having introduced our approach and
derived some theoretical results, we present the experimental evaluation in two
explicative problems to show the benefits of the environment configurability on
the performance of the learned policy.
| [
{
"version": "v1",
"created": "Thu, 14 Jun 2018 08:54:38 GMT"
}
]
| 1,529,020,800,000 | [
[
"Metelli",
"Alberto Maria",
""
],
[
"Mutti",
"Mirco",
""
],
[
"Restelli",
"Marcello",
""
]
]
|
1806.05554 | Frank Glavin | Frank G. Glavin and Michael G. Madden | Adaptive Shooting for Bots in First Person Shooter Games Using
Reinforcement Learning | IEEE Transactions on Computational Intelligence and AI in Games
(2015) | Glavin, Frank G., and Michael G. Madden. "Adaptive shooting for
bots in first person shooter games using reinforcement learning." IEEE
Transactions on Computational Intelligence and AI in Games 7, no. 2: 180-192.
(2015) | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In current state-of-the-art commercial first person shooter games, computer
controlled bots, also known as non player characters, can often be easily
distinguishable from those controlled by humans. Tell-tale signs such as failed
navigation, "sixth sense" knowledge of human players' whereabouts and
deterministic, scripted behaviors are some of the causes of this. We propose,
however, that one of the biggest indicators of non humanlike behavior in these
games can be found in the weapon shooting capability of the bot. Consistently
perfect accuracy and "locking on" to opponents in their visual field from any
distance are indicative capabilities of bots that are not found in human
players. Traditionally, the bot is handicapped in some way with either a timed
reaction delay or a random perturbation to its aim, which doesn't adapt or
improve its technique over time. We hypothesize that enabling the bot to learn
the skill of shooting through trial and error, in the same way a human player
learns, will lead to greater variation in game-play and produce less
predictable non player characters. This paper describes a reinforcement
learning shooting mechanism for adapting shooting over time based on a dynamic
reward signal from the amount of damage caused to opponents.
| [
{
"version": "v1",
"created": "Thu, 14 Jun 2018 14:00:14 GMT"
}
]
| 1,529,020,800,000 | [
[
"Glavin",
"Frank G.",
""
],
[
"Madden",
"Michael G.",
""
]
]
|
1806.05898 | Miquel Junyent | Miquel Junyent, Anders Jonsson, Vicen\c{c} G\'omez | Improving width-based planning with compact policies | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Optimal action selection in decision problems characterized by sparse,
delayed rewards is still an open challenge. For these problems, current deep
reinforcement learning methods require enormous amounts of data to learn
controllers that reach human-level performance. In this work, we propose a
method that interleaves planning and learning to address this issue. The
planning step hinges on the Iterated-Width (IW) planner, a state-of-the-art
planner that makes explicit use of the state representation to perform
structured exploration. IW is able to scale up to problems independently of the
size of the state space. From the state-actions visited by IW, the learning
step estimates a compact policy, which in turn is used to guide the planning
step. The type of exploration used by our method is radically different from
the standard random exploration used in RL. We evaluate our method in simple
problems, where we show it to have superior performance to the
state-of-the-art reinforcement learning algorithms A2C and Alpha Zero. Finally,
we present preliminary results in a subset of the Atari games suite.
| [
{
"version": "v1",
"created": "Fri, 15 Jun 2018 10:41:23 GMT"
}
]
| 1,529,280,000,000 | [
[
"Junyent",
"Miquel",
""
],
[
"Jonsson",
"Anders",
""
],
[
"Gómez",
"Vicenç",
""
]
]
|
1806.06505 | Ildefons Magrans de Abril | Ildefons Magrans de Abril, Ryota Kanai | A unified strategy for implementing curiosity and empowerment driven
reinforcement learning | 13 pages, 8 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although there are many approaches to implement intrinsically motivated
artificial agents, the combined usage of multiple intrinsic drives still
remains a relatively unexplored research area. Specifically, we hypothesize that
a mechanism capable of quantifying and controlling the evolution of the
information flow between the agent and the environment could be the fundamental
component for implementing a higher degree of autonomy into artificial
intelligent agents. This paper proposes a unified strategy for implementing two
semantically orthogonal intrinsic motivations: curiosity and empowerment.
Curiosity reward informs the agent about the relevance of a recent agent
action, whereas empowerment is implemented as the opposite information flow
from the agent to the environment that quantifies the agent's potential of
controlling its own future. We show that an additional homeostatic drive is
derived from the curiosity reward, which generalizes and enhances the
information gain of a classical curious/heterostatic reinforcement learning
agent. We show how a shared internal model by curiosity and empowerment
facilitates a more efficient training of the empowerment function. Finally, we
discuss future directions for further leveraging the interplay between these
two intrinsic rewards.
| [
{
"version": "v1",
"created": "Mon, 18 Jun 2018 05:58:04 GMT"
}
]
| 1,529,366,400,000 | [
[
"de Abril",
"Ildefons Magrans",
""
],
[
"Kanai",
"Ryota",
""
]
]
|
1806.06685 | San Pham Ms | Matthieu De Laere, San Tu Pham and Patrick De Causmaecker | Solving the Steiner Tree Problem in graphs with Variable Neighborhood
Descent | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Steiner Tree Problem (STP) in graphs is an important problem with various
applications in many areas such as design of integrated circuits, evolution
theory, networking, etc. In this paper, we propose an algorithm to solve the
STP. The algorithm includes a reducer and a solver using Variable Neighborhood
Descent (VND), interacting with each other during the search. New constructive
heuristics and a vertex score system for intensification purposes are proposed.
The algorithm is tested on a set of benchmarks, showing encouraging results.
| [
{
"version": "v1",
"created": "Wed, 13 Jun 2018 21:39:20 GMT"
}
]
| 1,529,366,400,000 | [
[
"De Laere",
"Matthieu",
""
],
[
"Pham",
"San Tu",
""
],
[
"De Causmaecker",
"Patrick",
""
]
]
|
1806.07037 | Shota Motoura | Shota Motoura, Kazeto Yamamoto, Shumpei Kubosawa, Takashi Onishi | Translating MFM into FOL: towards plant operation planning | null | Proceedings of the Third International Workshop on Functional
Modelling for Design and Operation of Engineering Systems, 24 - 25 May, 2018,
Kurashiki, Japan | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a method to translate multilevel flow modeling (MFM) into
a first-order language (FOL), which enables the utilisation of logical
techniques, such as inference engines and abductive reasoners. An example of
this is a planning task for a toy plant that can be solved in FOL using
abduction. In addition, owing to the expressivity of FOL, the language is
capable of describing actions and their preconditions. This allows the
derivation of procedures consisting of multiple actions.
| [
{
"version": "v1",
"created": "Tue, 19 Jun 2018 04:24:16 GMT"
}
]
| 1,529,452,800,000 | [
[
"Motoura",
"Shota",
""
],
[
"Yamamoto",
"Kazeto",
""
],
[
"Kubosawa",
"Shumpei",
""
],
[
"Onishi",
"Takashi",
""
]
]
|
1806.07135 | Luca Pulina | Arthur Bit-Monnot, Francesco Leofante, Luca Pulina, Erika Abraham, and
Armando Tacchella | SMarTplan: a Task Planner for Smart Factories | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Smart factories are on the verge of becoming the new industrial paradigm,
wherein optimization permeates all aspects of production, from concept
generation to sales. To fully pursue this paradigm, flexibility in the
production means as well as in their timely organization is of paramount
importance. AI is playing a major role in this transition, but the scenarios
encountered in practice might be challenging for current tools. Task planning
is one example where AI enables more efficient and flexible operation through
an online automated adaptation and rescheduling of the activities to cope with
new operational constraints and demands.
In this paper we present SMarTplan, a task planner specifically conceived to
deal with real-world scenarios in the emerging smart factory paradigm.
Including both special-purpose and general-purpose algorithms, SMarTplan is
based on current automated reasoning technology and it is designed to tackle
complex application domains. In particular, we show its effectiveness on a
logistic scenario, by comparing its specialized version with the
general-purpose one, and extending the comparison to other state-of-the-art task
planners.
| [
{
"version": "v1",
"created": "Tue, 19 Jun 2018 10:01:48 GMT"
}
]
| 1,529,452,800,000 | [
[
"Bit-Monnot",
"Arthur",
""
],
[
"Leofante",
"Francesco",
""
],
[
"Pulina",
"Luca",
""
],
[
"Abraham",
"Erika",
""
],
[
"Tacchella",
"Armando",
""
]
]
|
1806.07439 | Yun Long | Yun Long, Xueyuan She, Saibal Mukhopadhyay | HybridNet: Integrating Model-based and Data-driven Learning to Predict
Evolution of Dynamical Systems | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Robotic systems continuously interact with complex dynamical systems in
the physical world. Reliable predictions of spatiotemporal evolution of these
dynamical systems, with limited knowledge of system dynamics, are crucial for
autonomous operation. In this paper, we present HybridNet, a framework that
integrates data-driven deep learning and model-driven computation to reliably
predict the spatiotemporal evolution of dynamical systems even with inexact
knowledge of their parameters. A data-driven deep neural network (DNN) with
Convolutional LSTM (ConvLSTM) as the backbone is employed to predict the
time-varying evolution of the external forces/perturbations. On the other hand,
the model-driven computation is performed using Cellular Neural Network (CeNN),
a neuro-inspired algorithm to model dynamical systems defined by coupled
partial differential equations (PDEs). CeNN converts the intricate numerical
computation into a series of convolution operations, enabling a trainable PDE
solver. With a feedback control loop, HybridNet can learn the physical
parameters governing the system's dynamics in real-time, and accordingly adapt
the computation models to enhance prediction accuracy for time-evolving
dynamical systems. The experimental results on two dynamical systems, namely,
heat convection-diffusion system, and fluid dynamical system, demonstrate that
HybridNet produces higher accuracy than the state-of-the-art deep learning
based approach.
| [
{
"version": "v1",
"created": "Tue, 19 Jun 2018 19:32:42 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Jan 2019 07:12:14 GMT"
}
]
| 1,546,905,600,000 | [
[
"Long",
"Yun",
""
],
[
"She",
"Xueyuan",
""
],
[
"Mukhopadhyay",
"Saibal",
""
]
]
|
1806.07552 | Richard Tomsett | Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo
Chakraborty | Interpretable to Whom? A Role-based Model for Analyzing Interpretable
Machine Learning Systems | presented at 2018 ICML Workshop on Human Interpretability in Machine
Learning (WHI 2018), Stockholm, Sweden | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several researchers have argued that a machine learning system's
interpretability should be defined in relation to a specific agent or task: we
should not ask if the system is interpretable, but to whom is it interpretable.
We describe a model intended to help answer this question, by identifying
different roles that agents can fulfill in relation to the machine learning
system. We illustrate the use of our model in a variety of scenarios, exploring
how an agent's role influences its goals, and the implications for defining
interpretability. Finally, we make suggestions for how our model could be
useful to interpretability researchers, system developers, and regulatory
bodies auditing machine learning systems.
| [
{
"version": "v1",
"created": "Wed, 20 Jun 2018 04:52:33 GMT"
}
]
| 1,529,539,200,000 | [
[
"Tomsett",
"Richard",
""
],
[
"Braines",
"Dave",
""
],
[
"Harborne",
"Dan",
""
],
[
"Preece",
"Alun",
""
],
[
"Chakraborty",
"Supriyo",
""
]
]
|
1806.07637 | Frank Glavin | Frank G. Glavin and Michael G. Madden | Skilled Experience Catalogue: A Skill-Balancing Mechanism for Non-Player
Characters using Reinforcement Learning | IEEE Conference on Computational Intelligence and Games (CIG). August
2018 | IEEE Conference on Computational Intelligence and Games (CIG18),
Maastricht, The Netherlands, (2018) | 10.1109/CIG.2018.8490405 | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we introduce a skill-balancing mechanism for adversarial
non-player characters (NPCs), called Skilled Experience Catalogue (SEC). The
objective of this mechanism is to approximately match the skill level of an NPC
to an opponent in real-time. We test the technique in the context of a
First-Person Shooter (FPS) game. Specifically, the technique adjusts a
reinforcement learning NPC's proficiency with a weapon based on its current
performance against an opponent. Firstly, a catalogue of experience, in the
form of stored learning policies, is built up by playing a series of training
games. Once the NPC has been sufficiently trained, the catalogue acts as a
timeline of experience with incremental knowledge milestones in the form of
stored learning policies. If the NPC is performing poorly, it can jump to a
later stage in the learning timeline to be equipped with more informed
decision-making. Likewise, if it is performing significantly better than the
opponent, it will jump to an earlier stage. The NPC continues to learn in
real-time using reinforcement learning but its policy is adjusted, as required,
by loading the most suitable milestones for the current circumstances.
| [
{
"version": "v1",
"created": "Wed, 20 Jun 2018 09:41:54 GMT"
}
]
| 1,541,980,800,000 | [
[
"Glavin",
"Frank G.",
""
],
[
"Madden",
"Michael G.",
""
]
]
|
1806.07685 | Ivo D\"untsch | Ivo D\"untsch, G\"unther Gediga, Hui Wang | Approximation by filter functions | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this exploratory article, we draw attention to the common formal ground
among various estimators such as the belief functions of evidence theory and
their relatives, approximation quality of rough set theory, and contextual
probability. The unifying concept will be a general filter function composed of
a basic probability and a weighting which varies according to the problem at
hand. To compare the various filter functions we conclude with a simulation
study with an example from the area of item response theory.
| [
{
"version": "v1",
"created": "Wed, 20 Jun 2018 12:09:52 GMT"
}
]
| 1,529,539,200,000 | [
[
"Düntsch",
"Ivo",
""
],
[
"Gediga",
"Günther",
""
],
[
"Wang",
"Hui",
""
]
]
|